I am using https://github.com/sciunto/python-bibtexparser as a library module for my code (installed with pip3), so I don't have write access to the module.
But the set of entry types it uses is rather limited, as defined in standard_types in https://github.com/sciunto/python-bibtexparser/blob/master/bibtexparser/bibdatabase.py.
Is it possible to redefine or append to that set from the calling code? E.g.
#foo.py
import bibtexparser
#redefine standard_types
bibtexparser.STANDARD_TYPES = set([the complete set I need])
#or append the standard_types
bibtexparser.STANDARD_TYPES.update = set([append to the set])
?
Update: Actually I can't access the variable STANDARD_TYPES. I am trying to do:
from bibtexparser import bibdatabase as bd

class Window(Gtk.ApplicationWindow):
    def __init__(self, application, giofile=None):
        Gtk.ApplicationWindow.__init__(self,
                                       application=application,
                                       default_width=1000,
                                       default_height=200,
                                       border_width=5)
        print(bd)
        print(bd.STANDARD_TYPES)
Which is yielding:
<module 'bibtexparser.bibdatabase' from '/usr/lib/python3.5/site-packages/bibtexparser/bibdatabase.py'>
......
print(bd.STANDARD_TYPES)
AttributeError: module 'bibtexparser.bibdatabase' has no attribute 'STANDARD_TYPES'
It looks like other parts of the package perform from bibtexparser import STANDARD_TYPES, so you can't just replace it; the modules that have already imported it will keep the old version. Using update would work fine, though you need to fix your syntax:
bibtexparser.STANDARD_TYPES.update({append to the set})
That said, it looks like BibTexParser can be initialized with ignore_nonstandard_types=True, in which case it won't even check STANDARD_TYPES, which is probably what you really want; it doesn't involve hacky monkey-patching at all.
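The caveat about from-imports can be demonstrated without bibtexparser at all; the lib module below is a made-up stand-in for the library:

```python
import sys
import types

# A made-up stand-in for the library: a module exposing a set.
lib = types.ModuleType("lib")
lib.STANDARD_TYPES = {"article", "book"}
sys.modules["lib"] = lib

# This is what `from lib import STANDARD_TYPES` does elsewhere:
# it binds the *current* object to a second name.
client_types = lib.STANDARD_TYPES

# Rebinding the name in the library does NOT affect that holder...
lib.STANDARD_TYPES = {"article", "book", "misc"}
print("misc" in client_types)        # False

# ...but mutating the shared set in place is seen by everyone.
lib.STANDARD_TYPES = client_types    # restore the shared object
lib.STANDARD_TYPES.update({"software"})
print("software" in client_types)    # True
```

This is why calling .update(...) on the original set works where plain reassignment does not.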
I was trying to modify a function within a class. I was following the steps from this link. I want to understand why the changes are not working.
The function is:
def explain(self, test_df, row_index=None, row_num=None, class_id=None, background_size=50, nsamples=500)
from module ktrain
I am trying to get the shape values themselves instead of the plot. My changes are in
def alternative_explain(self, test_df, row_index=None, row_num=None, class_id=None, background_size=50, nsamples=500)
Then I try:
import types
import ktrain
funcType = types.MethodType
predictor1 = TabularPredictor()
But I get the error "name 'TabularPredictor' is not defined". Similarly, I cannot make a new, inherited class from TabularPredictor. What am I doing wrong?
Update: I did import ktrain
It sounds like you're a little confused about the python import statement, which has several alternative syntaxes.
Using import ktrain will only import a reference to the module ktrain in your code; if you want your code to refer to anything inside the ktrain module, you need to use dot-notation, e.g. ktrain.TabularPredictor(). Pros: everything from the ktrain module is now accessible from within your code. Cons: it might be a bit wordy to type out ktrain.TabularPredictor() every time you want to make an instance of the class, and you might only actually need one or two classes from that module.
Using from ktrain import TabularPredictor will make the TabularPredictor class accessible in your code's namespace, so there will be no need to use the dot-notation; you can just type TabularPredictor() when you want to create an instance. Pros: less wordy, you only import what you need (none of the other classes or functions from ktrain will be accessible from within your code). Cons: you might find out later on that some of the other classes/functions in the module are useful, which means you'll have to change your import statement. It can also be a pain to have to individually import 10 different classes from the same module.
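The two styles side by side, with the standard library's math module standing in for ktrain:

```python
# Style 1: bind the module itself; use dot-notation for its contents.
import math
print(math.sqrt(9))    # 3.0

# Style 2: bind one name from the module into this namespace.
from math import sqrt
print(sqrt(9))         # 3.0

# Either style also supports aliasing with `as`.
import math as m
from math import sqrt as square_root
print(m.pi == math.pi)     # True
print(square_root(4))      # 2.0
```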
You can read more here.
I am writing a module in Python which has many functions to be used in a variety of situations, resulting in changing default values over time. (Currently this is a .py file I am importing.) Many of the functions have the same hard-coded defaults. I would like to make these defaults configurable, ideally through some functionality that can be accessed via a Jupyter notebook that has already imported the module, or at the very least that can be set at the time of import or via a config file.
I know how to do this using other languages but have been struggling to do the same in Python. I do not want defaults to be hard-coded. I know that some of the difficulty with this is that the module is only imported once, meaning variables inside the module are no longer accessible after the import is complete. If there is a way of looking at this more pythonically, I would accept an answer that explains why my desired solution is non-pythonic and what a good alternative would be.
For example here is what the functions would look like:
def function1(arg1=default_param1):
    return arg1

def function2(arg1=default_param1, arg2=default_param3):
    ...  # other cool stuff
Here is what I would like to be able to do something similar to this:
import foo_module as foo
foo.function1()
foo.default_param1 = new_value
foo.function1()
==> arg1
==> new_value
Of course, with this setup you can always change the value input every time you call the function, but this is less than ideal.
In this case how would I change default_param1 across the entire module via the code that is importing the module?
Edit: to clarify, this module would not be accessed via the command line. A primary use case is to import it into a jupyter notebook.
You could use environment variables such that, upon being imported, your module reads these variables and adjusts the defaults accordingly.
You could set the environment variables ahead of time using os.environ. So,
import os
os.environ['BLAH'] = '5'
import my_module
Inside my_module.py, you'd have something like
import os
try:
    BLAH_DEFAULT = int(os.environ['BLAH'])
except (KeyError, ValueError):
    BLAH_DEFAULT = 3
If you'd rather not fiddle with environment variables and you're okay with the defaults being mutable after importation, my_module.py could store the defaults in a global dict. E.g.
defaults = {
    'BLAH': 3,
    'FOO': 'bar',
    'BAZ': True
}
Your user could update that dictionary manually (my_module.defaults['BAZ']=False) or, if that bothers you, you could hide the mechanics in a function:
def update_default(key, value):
    if key not in defaults:
        raise ValueError('{} is not a valid default parameter.'.format(key))
    defaults[key] = value
You could spiff up that function by doing type/range checks on the passed value.
However, keep in mind that, unlike in languages like C++ and Java, nothing in Python is truly hidden. A user would be able to directly reference my_module.defaults thus bypassing your function.
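To get the exact behaviour the question asks for, each function can look its default up in the dict at call time instead of baking it in at def time (a sketch; the names are illustrative):

```python
defaults = {"param1": "arg1"}

def function1(arg1=None):
    # Resolve the default at call time, not at def time, so that
    # later changes to `defaults` are picked up.
    if arg1 is None:
        arg1 = defaults["param1"]
    return arg1

print(function1())               # 'arg1'
defaults["param1"] = "new_value"
print(function1())               # 'new_value'
```

The None sentinel matters: a plain def function1(arg1=defaults["param1"]) would freeze the value once, when the def statement runs.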
Suppose I have two nearly identical versions of a Python package mymod, i.e. mymod0 and mymod1. Each of these packages has files __init__.py and foo.py, and foo.py has a single function printme(). Calling mymod0.foo.printme() will print "I am mymod0" and calling mymod1.foo.printme() will print "I am mymod1". So far so good.
But now I need to dynamically import either mymod0 or mymod1. The user will input either 0 or 1 to the script (as variable "index"), and then I can create packageName="mymod"+str(index)
I tried this:
module=importlib.import_module(module_name)
module.foo.printme()
But I get this error:
AttributeError: 'module' object has no attribute 'foo'
How can I specify that the package should now be referred to as module so that module.foo.printme() will work?
UPDATE: So it looks like the easiest solution is to use the exec() function. This way I can dynamically create an import statement like this:
cmdname="from mymod%s import foo" % index
exec(cmdname)
Then:
foo.printme()
This seems to work.
How can I specify that the package should now be referred to as module so that module.foo.printme() will work?
You have to make sure <module_name>.__init__ imports foo into the module's namespace:
# ./<module_name>/__init__.py
from . import foo
then you can
module=importlib.import_module(module_name)
module.foo.printme()
But now I need to dynamically import either mymod0 or mymod1.
Note this only works the first time, because Python caches loaded modules. If the module has changed since the start of the program, use the reload function (importlib.reload in Python 3). Just a word of caution: there are several caveats associated with this, and it may end up not doing what you intended.
How can I recreate this dynamically?
for i in range(0, 2):
    module_name = 'mymod%s' % i
    module = importlib.import_module(module_name)
    module.foo.printme()
In python, if you need a module from a different package you have to import it. Coming from a Java background, that makes sense.
import foo.bar
What doesn't make sense though, is why do I need to use the full name whenever I want to use bar? If I wanted to use the full name, why do I need to import? Doesn't using the full name immediately describe which module I'm addressing?
It just seems a little redundant to have from foo import bar when that's what import foo.bar should be doing. Also a little vague why I had to import when I was going to use the full name.
The thing is, even though Python's import statement is designed to look similar to Java's, they do completely different things under the hood. As you know, in Java an import statement is really little more than a hint to the compiler. It basically sets up an alias for a fully qualified class name. For example, when you write
import java.util.Set;
it tells the compiler that throughout that file, when you write Set, you mean java.util.Set. And if you write s.add(o) where s is an object of type Set, the compiler (or rather, linker) goes out and finds the add method in Set.class and puts in a reference to it.
But in Python,
import util.set
(that is a made-up module, by the way) does something completely different. See, in Python, packages and modules are not just names, they're actual objects, and when you write util.set in your code, that instructs Python to access an object named util and look for an attribute on it named set. The job of Python's import statement is to create that object and attribute. The way it works is that the interpreter looks for a file named util/__init__.py, uses the code in it to define properties of an object, and binds that object to the name util. Similarly, the code in util/set.py is used to initialize an object which is bound to util.set. There's a function called __import__ which takes care of all of this, and in fact the statement import util.set is basically equivalent to
util = __import__('util.set')
The point is, when you import a Python module, what you get is an object corresponding to the top-level package, util. In order to get access to util.set you need to go through that, and that's why it seems like you need to use fully qualified names in Python.
There are ways to get around this, of course. Since all these things are objects, one simple approach is to just bind util.set to a simpler name, i.e. after the import statement, you can have
set = util.set
and from that point on you can just use set where you otherwise would have written util.set. (Of course this obscures the built-in set class, so I don't recommend actually using the name set.) Or, as mentioned in at least one other answer, you could write
from util import set
or
import util.set as set
This still imports the package util with the module set in it, but instead of creating a variable util in the current scope, it creates a variable set that refers to util.set. Behind the scenes, this works kind of like
_util = __import__('util', fromlist='set')
set = _util.set
del _util
in the former case, or
_util = __import__('util.set')
set = _util.set
del _util
in the latter (although both ways do essentially the same thing). This form is semantically more like what Java's import statement does: it defines an alias (set) to something that would ordinarily only be accessible by a fully qualified name (util.set).
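The difference between the two __import__ forms can be checked against a standard-library package (json stands in for the made-up util):

```python
# Without fromlist, __import__ returns the *top-level* package...
top = __import__('json.decoder')
print(top.__name__)    # 'json'

# ...with a non-empty fromlist, it returns the submodule itself.
sub = __import__('json.decoder', fromlist=['decoder'])
print(sub.__name__)    # 'json.decoder'
```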
You can shorten it, if you would like:
import foo.bar as whateveriwant
Using the full name prevents two packages with the same-named submodules from clobbering each other.
There is a module in the standard library called io:
In [84]: import io
In [85]: io
Out[85]: <module 'io' from '/usr/lib/python2.6/io.pyc'>
There is also a module in scipy called io:
In [95]: import scipy.io
In [96]: scipy.io
Out[96]: <module 'scipy.io' from '/usr/lib/python2.6/dist-packages/scipy/io/__init__.pyc'>
If you wanted to use both modules in the same script, then namespaces are a convenient way to distinguish the two.
In [97]: import this
The Zen of Python, by Tim Peters
...
Namespaces are one honking great idea -- let's do more of those!
In Python, importing doesn't just indicate you might use something. The import actually executes code at the module level. You can think of the import as being the moment where the functions are 'interpreted' and created. Any code that is at the __init__.py level or not inside a function or class definition happens then.
The import also binds the module object (a reference, not a copy of its contents) into the namespace of the file / module / whatever where it is imported. An IDE then has a list of the functions you might be starting to type, for command completion.
Part of the Python philosophy is explicit is better than implicit. Python could automatically import the first time you try to access something from a package, but that's not explicit.
I'm also guessing that package initialization would be much more difficult if the imports were automatic, as it wouldn't be done consistently in the code.
You're a bit confused about how Python imports work. (I was too when I first started.) In Python, you can't simply refer to something within a module by the full name, unlike in Java; you HAVE to import the module first, regardless of how you plan on referring to the imported item. Try typing math.sqrt(5) in the interpreter without importing math or math.sqrt first and see what happens.
Anyway... the reason import foo.bar requires you to use foo.bar instead of just bar is to prevent accidental namespace conflicts. For example, what if you do import foo.bar, and then import baz.bar?
You could, of course, choose to do import foo.bar as bar (i.e. aliasing), but if you're doing that you may as well just use from foo import bar. (EDIT: except when you want to import methods and variables. Then you have to use the from ... import ... syntax. This includes instances where you want to import a method or variable without aliasing, i.e. you can't simply do import foo.bar if bar is a method or variable.)
Unlike in Java, in Python import foo.bar declares that you are going to use the thing referred to by foo.bar.
This matches with Python's philosophy that explicit is better than implicit. There are more programming languages that make inter-module dependencies more explicit than Java, for example Ada.
Using the full name makes it possible to disambiguate definitions with the same name coming from different modules.
You don't have to use the full name. Try one of these
from foo import bar
import foo.bar as bar
import foo.bar
bar = foo.bar
from foo import *
A few reasons why explicit imports are good:
They help signal to humans and tools what packages your module depends on.
They avoid the overhead of dynamically determining which packages have to be loaded (and possibly compiled) at run time.
They (along with sys.path) unambiguously distinguish symbols with conflicting names from different namespaces.
They give the programmer some control of what enters the namespace within which they are working.
Is it only possible if I rename the file? Or is there a __module__ variable to the file to define what's its name?
If you really want to import the file 'oldname.py' with the statement 'import newname', there is a trick that makes it possible: Import the module somewhere with the old name, then inject it into sys.modules with the new name. Subsequent import statements will also find it under the new name. Code sample:
# this is in file 'oldname.py'
...module code...
Usage:
# inject the 'oldname' module with a new name
import oldname
import sys
sys.modules['newname'] = oldname
Now you can import your module everywhere with import newname.
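The same trick works with any already-imported module; here json gets a made-up alias jsn:

```python
import sys
import json

# Register the module object under a second name.
sys.modules['jsn'] = json

# Subsequent import statements now find it under the alias.
import jsn
print(jsn is json)         # True
print(jsn.dumps([1, 2]))   # '[1, 2]'
```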
You can change the name used for a module when importing by using as:
import foo as bar
print bar.baz
Yes, you should rename the file. Best would be after you have done that to remove the oldname.pyc and oldname.pyo compiled files (if present) from your system, otherwise the module will be importable under the old name too.
When you do import module_name the Python interpreter looks for a file module_name.extension in PYTHONPATH. So there's no changing that name without changing the name of the file. But of course you can do:
import module_name as new_module_name
or even
import module_name.submodule.subsubmodule as short_name
Useful e.g. for writing DB code.
import sqlite3 as sql
sql.whatever..
And then to switch eg. sqlite3 to pysqlite you just change the import line
Every class has an __module__ property, although I believe changing this will not change the namespace of the Class.
If it is possible, it would probably involve using setattr to insert the methods or class into the desired module, although you run the risk of making your code very confusing to your future peers.
Your best bet is to rename the file.
Where would you like to have this __module__ variable, so your original script knows what to import? Modules are recognized by file names and looked in paths defined in sys.path variable.
So, you have to rename the file, then remove the oldname.pyc, just to make sure everything works right.
I had an issue like this with bsddb. I was forced to install the bsddb3 module but hundreds of scripts imported bsddb. Instead of changing the import in all of them, I extracted the bsddb3 egg, and created a soft link in the site-packages directory so that both "bsddb" and "bsddb3" were one in the same to python.
You can set the module via the __module__ attribute, like below.
func.__module__ = module
You can even create a decorator to change the module names for specific functions in a file for example:
def set_module(module):
    """Decorator for overriding __module__ on a function or class.

    Example usage::

        @set_module('numpy')
        def example():
            pass

        assert example.__module__ == 'numpy'
    """
    def decorator(func):
        if module is not None:
            func.__module__ = module
        return func
    return decorator
and then use
@set_module('my_module')
def some_func(...):
Note that this decorator changes the reported module name for individual functions, not for the whole file.
This example is taken from numpy source code: https://github.com/numpy/numpy/blob/0721406ede8b983b8689d8b70556499fc2aea28a/numpy/core/numeric.py#L289