I've been struggling with creating a class for my image processing code in Python.
The code requires a whole bunch of different parameters (set in a params.txt file) which can easily be grouped into different categories. For example, some are paths, some are related to the experimental geometry, and some are just switches for turning certain image processing features on/off, etc.
If my "structure" (I'm not sure yet how I should create it) is created as P, I would like to have something like,
P = my_param_object()
P.load_parameters('path/to/params.txt')
and then, from the main code, I can access whatever elements I need like so,
print(P.paths.basepath())
'/some/path/to/data'
print(P.paths.year())
2019
print(P.filenames.lightfield())
'andor_lightfield.dat'
print(P.geometry.dist_lens_to_sample())
1.5
print(P.settings.subtract_background())
False
print(P.settings.apply_threshold())
True
I already tried creating my own class to do this, but everything ends up in one massive block, and I don't know how to create nested parts of the class. For example, I have a setting and a function called "load_background". This pairing makes sense because the load_background function always loads a specific filename from a specific location, but it should only do so if the load_background parameter is set to True.
From within the class, I tried doing something like
self.setting_load_background = False

def method_load_background(self):
    myutils.load_dat(self.background_fname())
but that's very ugly. It would be nicer to have,
if P.settings.load_background() == True:
    P.load_background()
else:
    P.generate_random_background()
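For what it's worth, one way to get this kind of nested access is to group the parsed parameters into one namespace object per category. Below is a minimal sketch, assuming a hypothetical params.txt format with one category.name = value assignment per line; note the parameters end up as plain attributes (P.paths.basepath) rather than the calls (P.paths.basepath()) shown above:

import ast
from types import SimpleNamespace

class MyParamObject:
    def load_parameters(self, path):
        # hypothetical line format: paths.basepath = '/some/path/to/data'
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                key, _, raw = line.partition('=')
                category, name = key.strip().split('.')
                # create the category namespace (paths, settings, ...) on first use
                if not hasattr(self, category):
                    setattr(self, category, SimpleNamespace())
                # literal_eval turns "2019" into 2019, "True" into True, etc.
                setattr(getattr(self, category), name, ast.literal_eval(raw.strip()))

P = MyParamObject()
P.load_parameters('path/to/params.txt')

With this layout the conditional above becomes if P.settings.load_background: ..., and load_background can live as an ordinary method on MyParamObject.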
Example code (which does not work):
#pytest.fixture(scope="function")
def my_method_user(db):
user = User(name="method_user").save()
return user
#pytest.fixture(scope="class")
def my_class_user(db):
user = User(name="class_user").save()
return user
#pytest.mark.usefixtures("my_class_user")
class TestClass:
def test_user_count(self):
assert len(User.objects.all()) == 1
def test_user_count_2(self, my_method_user):
assert len(User.objects.all()) == 2
However, doing this (or anything similar where I tried to access the DB directly through a queryset, etc.) basically always gives me ScopeMismatch.
Basically, what I'm trying to do is:
Create a User that will be used "globally" within the class - a fixture with class scope (or rather, some rows that are set up prior to running that class's tests).
In some of the test functions, I would like a user to be created temporarily (under a certain condition, hence the fixture). When the tested function is done, the user is automatically removed.
Later on, the data might be more nested than a simple class/function split, and all five fixture scopes might be used. So rows created by class-scoped fixtures should also be deleted afterwards, but rows created by module/package/session-scoped fixtures should stay.
I've researched using django_db_blocker.unblock() to avoid the ScopeMismatch and keep data throughout multiple tests; however, that gives me a problem where the data is not removed after the tests are done (unless I remove it manually, which may be quite error-prone).
Am I fundamentally wrong about how I'm supposed to use pytest fixtures, or am I missing something? Is there a different way to achieve this?
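For what it's worth, pytest-django's db fixture is function-scoped, which is why requesting it (directly or indirectly) from a class-scoped fixture raises ScopeMismatch. One possible workaround is a class-scoped fixture that uses django_db_blocker with explicit teardown, so the rows it created are removed when the class is done; a sketch, assuming the User model from the example above:

import pytest
from myapp.models import User  # hypothetical import path

@pytest.fixture(scope="class")
def my_class_user(django_db_setup, django_db_blocker):
    # cannot depend on the function-scoped db fixture here,
    # so unblock database access manually
    with django_db_blocker.unblock():
        user = User.objects.create(name="class_user")
    yield user
    # teardown runs once per class and removes the row this fixture created
    with django_db_blocker.unblock():
        user.delete()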
To provide a bit of context, I am building a risk model that pulls data from various sources. Initially I wrote the model as a single function that, when executed, read in the different data sources as pandas.DataFrame objects and used those objects when necessary. As the model grew in complexity, it quickly became unreadable, and I found myself copying and pasting blocks of code often.
To clean up the code, I decided to make a class that, when initialized, reads, cleans, and parses the data. Initialization takes about a minute to run and builds my model in its entirety.
The class also has some additional functionality. There is a generate_email method that sends an email with details about high risk factors, and another method, append_history, that snapshots the risk model at a point in time and saves it so I can run time comparisons.
The thing about these two additional methods is that I cannot imagine a scenario where I would call them without first re-calibrating my risk model. So I have considered calling them in __init__() like my other methods. I haven't, only because I am trying to justify having a class in the first place.
I am consulting this community because my project structure feels clunky and awkward. I am inclined to believe that I should not be using a class at all. Is it frowned upon to create classes merely for the purpose of organization? Also, is it bad practice to call instance methods (that take upwards of a minute to run) within __init__()?
Ultimately, I am looking for reassurance or a better code structure. Any help would be greatly appreciated.
Here is some pseudo code showing my project structure:
class RiskModel:
    def __init__(self, data_path_a, data_path_b):
        self.data_path_a = data_path_a
        self.data_path_b = data_path_b
        self.historical_data = None
        self.raw_data = None
        self.lookup_table = None
        self._read_in_data()
        self.risk_breakdown = None
        self._generate_risk_breakdown()
        self.risk_summary = None
        self._generate_risk_summary()

    def _read_in_data(self):
        # read in a .csv
        self.historical_data = pd.read_csv(self.data_path_a)
        # read an excel file containing many sheets into an ordered dictionary
        self.raw_data = pd.read_excel(self.data_path_b, sheet_name=None)
        # store a specific sheet from the excel file that is used by most of
        # my class's methods
        self.lookup_table = self.raw_data["Lookup"]

    def _generate_risk_breakdown(self):
        '''
        A function that creates a DataFrame from self.historical_data,
        self.raw_data, and self.lookup_table and stores it in
        self.risk_breakdown
        '''
        self.risk_breakdown = some_dataframe

    def _generate_risk_summary(self):
        '''
        A function that creates a DataFrame from self.lookup_table and
        self.risk_breakdown and stores it in self.risk_summary
        '''
        self.risk_summary = some_dataframe

    def generate_email(self, recipient):
        '''
        A function that sends an email with details about high risk factors
        '''

if __name__ == "__main__":
    risk_model = RiskModel(data_path_a, data_path_b)
    risk_model.generate_email("recipient@generic.com")
In my opinion it is a good way to organize your project, especially given the high reusability of parts of the code that you mentioned.
One thing, though: I wouldn't put the _read_in_data, _generate_risk_breakdown and _generate_risk_summary methods inside __init__, but would instead let the user call these methods after initializing the RiskModel instance.
This way the user would be able to read in data from a different path, or generate only the risk breakdown or summary, without reading in the data once again.
Something like this:
my_risk_model = RiskModel()
my_risk_model.read_in_data(path_a, path_b)
my_risk_model.generate_risk_breakdown(parameters)
my_risk_model.generate_risk_summary(other_parameters)
If there is a risk of the user calling these methods in an order that would break the logical chain, you could raise an exception if generate_risk_breakdown or generate_risk_summary is called before read_in_data. Of course, you could also move only the generate_... methods out, leaving the data import inside __init__.
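A minimal sketch of that guard (the choice of RuntimeError is arbitrary; attribute names follow the pseudo code above):

import pandas as pd

class RiskModel:
    def __init__(self):
        self.lookup_table = None  # populated by read_in_data

    def read_in_data(self, data_path_a, data_path_b):
        self.historical_data = pd.read_csv(data_path_a)
        self.raw_data = pd.read_excel(data_path_b, sheet_name=None)
        self.lookup_table = self.raw_data["Lookup"]

    def generate_risk_summary(self, parameters):
        if self.lookup_table is None:
            raise RuntimeError(
                "read_in_data() must be called before generate_risk_summary()")
        # ... build the summary as before ...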
To advocate further for exposing the generate_... methods outside __init__, consider a scenario where you would like to generate multiple risk summaries, varying the parameters. It would make sense not to re-create the RiskModel and read the same data every time, but instead to change the input to the generate_risk_summary method:
my_risk_model = RiskModel()
my_risk_model.read_in_data(path_a, path_b)

for parameter in [50, 60, 80]:
    my_risk_model.generate_risk_summary(parameter)
    my_risk_model.generate_email('test@gmail.com')
This question is about Blender Python scripting.
I'm completely new to this, so please excuse any stupid/newbie questions/comments.
I made it simple (3 lines of code) to make the problem easy to address.
What I need is code that adds a new UV map to each object within a loop.
But this code instead adds multiple new UV maps to only one object.
import bpy

for x in bpy.context.selected_objects:
    bpy.ops.mesh.uv_texture_add()
What am I doing wrong here?
Thanks
Similar to what Sambler said, I always use:
for active in bpy.context.selected_objects:
    bpy.context.scene.objects.active = active
    ...
These two lines I use more than any other when programming for Blender (except import bpy of course).
I think I first learned this here if you'd like a good intro on how this works:
https://cgcookiemarkets.com/2014/12/11/writing-first-blender-script/
In the article he uses:
# Create a list of all the selected objects
selected = bpy.context.selected_objects

# Iterate through all selected objects
for obj in selected:
    bpy.context.scene.objects.active = obj
    ...
His comments explain it pretty well, but I will take it a step further. As you know, Blender lacks built-in multi-object editing, so you have selected objects and one active object. The active object is the one whose values you can and will edit, whether you set them from Python or from Blender's GUI itself. So although we are writing it slightly differently each time, the effect is the same.
We loop over all selected objects with for active in bpy.context.selected_objects, then we set the active object to the next one in the loop with bpy.context.scene.objects.active = active. As a result, whatever we do in the loop gets done once for every object in the selection, and any operation we do on the active object gets done on all of the objects. What would happen if we only used the first line and put our code in the for loop?
for active in bpy.context.selected_objects:
    ...
Whatever we do in the loop still gets done once for every object in the selection, but any operation on the object in question gets done only on the active object, repeated as many times as there are selected objects. This is why we need to set the active object from within the loop.
The uv_texture_add operator is one that only works on the current active object. You can change the active object by setting scene.objects.active.
import bpy

for x in bpy.context.selected_objects:
    bpy.context.scene.objects.active = x
    bpy.ops.mesh.uv_texture_add()
Note: I am not really familiar with Blender.
It seems that bpy.ops operations depend on the state of bpy.context. The context can also be overridden per-operation.
I assume that uv_texture_add() only works on a single object at a time?
Try something like this:
import bpy

for x in bpy.context.selected_objects:
    override = {"selected_objects": x}
    bpy.ops.mesh.uv_texture_add(override)
That should run the operations as if only one object was selected at a time.
Source:
https://www.blender.org/api/blender_python_api_2_63_17/bpy.ops.html#overriding-context
I have a simple event handler that looks at what has actually been changed (it's registered for IObjectModifiedEvent events); the code looks like:
def on_change_do_something(obj, event):
    modified = False
    # check if the publication has changed
    for change in event.descriptions:
        if change.interface == IPublication:
            modified = True
            break
    if modified:
        pass  # do something
So my question is: how can I programmatically generate those descriptions? I'm using plone.app.dexterity everywhere, so z3c.form is doing that automagically when using a form, but I want to test it with a unittest.
event.descriptions is nominally an IModificationDescription object, which is essentially a list of IAttributes objects: each Attributes object has an interface (e.g. a schema) and attributes (e.g. a list of the field names modified).
The simplest solution is to create a zope.lifecycleevent.Attributes object for each field changed, and pass them as arguments to the event constructor -- example:
# imports elided...
changelog = [
    Attributes(IFoo, 'some_fieldname_here'),
    Attributes(IMyBehaviorHere, 'some_behavior_provided_fieldname_here'),
]
notify(ObjectModifiedEvent(context, *changelog))
I may have misunderstood something, but you could simply fire the event in your code with the same parameters that z3c.form uses (similar to the comment from @keul).
After a short search in Plone 4.3.x, I found this in z3c.form.form:
def applyChanges(self, data):
    content = self.getContent()
    changes = applyChanges(self, content, data)
    # ``changes`` is a dictionary; if empty, there were no changes
    if changes:
        # Construct change-descriptions for the object-modified event
        descriptions = []
        for interface, names in changes.items():
            descriptions.append(
                zope.lifecycleevent.Attributes(interface, *names))
        # Send out a detailed object-modified event
        zope.event.notify(
            zope.lifecycleevent.ObjectModifiedEvent(content, *descriptions))
    return changes
You need two test cases: one where the handler does nothing and one that goes through your code.
applyChanges is in the same module (z3c.form.form); it iterates over the form fields and computes a dict with all changes.
You should set a breakpoint there to inspect how the dict is built.
Afterwards you can do the same in your test case.
This way you can write readable test cases.
def test_do_something_in_event(self):
    content = self.get_my_content()
    descriptions = self.get_event_descriptions()
    zope.event.notify(zope.lifecycleevent.ObjectModifiedEvent(content, *descriptions))
    self.assertSomething(...)
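For the complementary case (the handler doing nothing), the same pattern works with a change description for an unrelated schema, so the IPublication check never matches; ISomeOtherSchema and assertNothingChanged are hypothetical placeholders:

def test_do_nothing_in_event(self):
    content = self.get_my_content()
    # a description for a schema the handler does not care about
    descriptions = [zope.lifecycleevent.Attributes(ISomeOtherSchema, 'title')]
    zope.event.notify(zope.lifecycleevent.ObjectModifiedEvent(content, *descriptions))
    self.assertNothingChanged(...)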
IMHO, mocking the whole logic away may be a bad idea for the future: if the code changes and behaves completely differently, your test will still pass.
The desire is for the user to instantiate a class that represents the transient, along with automatic access to a member item for each variable being represented (up to 200 variables). The set of variable class instances would be dynamic, based on file-based input data, and the desire is to use the variable names provided in the file to create a collection of these variable instances that are accessible with a natural naming scheme. Effectively, the variable class hides the details of where the data is stored and how the independent variable (i.e., time) is stored. The following pseudo code shows random lines that the end user might write. In some cases, the post-processing may be much more extensive.
tran1 = CTransient('TranData', ...)
Padj = tran1.pressPipe1 + 10  # add 10 bar to a pressure for conservatism
Tsat = TsatRoutine(tran1.tempPipe1)
MyPlotRoutine(tran1.tempPipe1, tran1.tempPipe2)
where the pressPipeX and tempPipeX names are defined in the input data files, the corresponding numpy data vectors are specified in the 'TranData' input file, and each is an instance of a CVariable class.
Help on how to dynamically build the set of instances that represent the transient variables such that they can be accessed would be appreciated.
Your description of what you're trying to do isn't entirely clear, but automatically naming variables something1, something2, etc. is generally a bad idea. Use a list instead:
transientvariables = []
transientvariables.append(makenewtransientvariable())
# ...
for tv in transientvariables:
    print(tv)
Edit: OK, I think I see what you're getting at, although your explanation still isn't exactly easy to read. You have a collection of pipes, with a time series of temperature and pressure recorded for each one, right?
The easiest way would be to use a dictionary:
transients["tempPipe1"]
Or nested dictionaries:
transients["temp"]["Pipe1"]
Or you could override your class' __getattr__ method, so that it looks in a dictionary, and you can do:
transients.tempPipe1
Edit 2: Overriding __getattr__ would look a bit like this:
def __getattr__(self, name):
    if name in self.varMap:
        return self.varMap[name]
    raise AttributeError
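To tie this back to the question: varMap can be filled dynamically from whatever variable names the input file supplies, which gives the natural naming scheme without hand-writing 200 attributes. A minimal sketch (the _parse method is a hypothetical placeholder for the real 'TranData' format; the lookup goes through self.__dict__ so that __getattr__ cannot recurse infinitely before varMap exists):

import numpy as np

class CTransient:
    def __init__(self, filename):
        self.varMap = {}
        # fill the map from (name, vector) pairs found in the file
        for name, vector in self._parse(filename):
            self.varMap[name] = np.asarray(vector)

    def _parse(self, filename):
        # placeholder: yield (name, vector) pairs from the input file
        yield "pressPipe1", [1.0, 2.0, 3.0]
        yield "tempPipe1", [300.0, 301.0, 302.0]

    def __getattr__(self, name):
        # called only when normal attribute lookup fails
        varMap = self.__dict__.get("varMap", {})
        if name in varMap:
            return varMap[name]
        raise AttributeError(name)

tran1 = CTransient("TranData")
Padj = tran1.pressPipe1 + 10  # natural naming, as in the question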