In Odoo 8, I want to do several actions by Python code, pressing only one button:
Create an invoice from a sale order
Validate that invoice
Register payment of that invoice
If I look at the code for validating an invoice, I see that pressing "Validate" makes Odoo call the invoice_validate() method. But if I only call this method, it doesn't generate the internal number, among other things. Only following these steps works: Odoo validate invoice from code.
So, where can I see the complete list of methods Odoo calls when I press a button? Thanks!
Methods can be called from various locations, and many methods rely on other methods having already run before them. Here are some guidelines that I've found useful when working on similar tasks.
Methods can be run by:
Buttons when clicked
Onchange methods defined in the view XML (old API).
Workflows
Usually the first methods that would run are the ones defined in the view. Go through them and study what they are doing and if you will have to call them as well. Sometimes they perform various important calculations/data population.
Once the button gets clicked, it can call a method, or simply transition the workflow, or the called method might transition the workflow, etc. So it is important that you check what the corresponding workflow does as well.
It usually helps to study the inputs and outputs of the methods involved, including what gets passed around in the context, to get a better picture of the path to automating the various steps.
Each module can be different, depending on its developers' programming approach, the api features that were available at its time of implementation, etc. So it will take some effort to understand the sequence of events for your specific case. Still, once you go through this a couple of times it gets much more intuitive.
To answer your question - the complete list of methods is in the code. You can use various dev/debug tools (your browser's dev console for one) to help you on the way.
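For what it's worth, here is a rough sketch of the kind of sequence this usually boils down to for the sale order -> invoice flow in Odoo 8. The method and signal names come from the 8.0 sale/account modules, but treat it as an illustration and verify each call against your own code base and its workflows:

from openerp import api, models

class SaleOrder(models.Model):
    _inherit = 'sale.order'

    @api.multi
    def action_invoice_and_validate(self):
        for order in self:
            # 1. Create the invoice (the same method the "Create Invoice"
            #    button ends up calling).
            invoice_id = order.action_invoice_create()
            invoice = self.env['account.invoice'].browse(invoice_id)
            # 2. Validate it by sending the same workflow signal the
            #    "Validate" button sends; the workflow transition is what
            #    assigns the internal number, not invoice_validate() alone.
            invoice.signal_workflow('invoice_open')
            # 3. Registering the payment follows the same idea: find out what
            #    the "Register Payment" button triggers (account.voucher,
            #    pay_and_reconcile, ...) and call or signal that here.
        return True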
Related
As a self-learner I've found Kivy's binding mechanism (mainly in Python) a bit confusing. In a lot of Kivy source code and/or examples I've seen that some properties are bound to a callback in the constructor, while others are not (though they are used in those callbacks). My queries are:
Do I need to bind every created (i.e. custom) or provided (by Kivy) property to a callback (whenever it needs to be kept updated)?
In either case, which is the better option: bind or fbind?
Does scheduling a callback have any advantages (particularly during widget creation)?
My corresponding assumptions are:
Only bind those properties that are inter-related or inter-dependent, along with those used for canvas drawing. The rest will be managed or updated internally by Kivy.
They almost result in the same thing (except that it depends on implicit vs. explicit property naming) - see the sketch below.
As per the Kivy docs, widget creation is relatively slow, which is why scheduling is necessary.
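To illustrate what I mean by the two options, here is a rough sketch (not taken from any particular example) of the same binding done once with bind and once with fbind:

from kivy.uix.widget import Widget

class MyWidget(Widget):
    def __init__(self, **kwargs):
        super(MyWidget, self).__init__(**kwargs)
        # Option 1: bind -- properties passed as keyword arguments.
        self.bind(pos=self.redraw, size=self.redraw)
        # Option 2: fbind -- property name passed as a string; it returns a
        # uid that can later be passed to funbind().
        # self.fbind('pos', self.redraw)
        # self.fbind('size', self.redraw)

    def redraw(self, *args):
        # update whatever canvas instructions depend on pos/size here
        pass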
(This is a re-post of a topic that I posted earlier with the same title but got no response from the community.)
I have not found any descriptive, informative article on this so far.
I need your proper guidance, thank you.
I'm very new to wxpython, so this is probably an obvious question.
Let's say I wanted to create a program like an installation wizard which, when next is clicked, destroys the current set of widgets and creates a new set. However, the user must also be able to go back to a previous page.
Would I need classes for each page with their own __init__, could I just use a normal function, or is there a better way to do this?
There are a couple of valid approaches. The most obvious would be to just use wxPython's built-in wizard which can be found in wx.wizard. You might want to take a look at the documentation, the wiki or the wxPython demo for examples. You might also find this tutorial helpful:
http://www.blog.pythonlibrary.org/2011/01/27/wxpython-a-wizard-tutorial/
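To give you an idea of what the built-in one looks like, here is a minimal sketch against classic wxPython's wx.wizard (exact names can differ a bit between wxPython versions, so double-check against the demo):

import wx
import wx.wizard as wiz

class TitledPage(wiz.WizardPageSimple):
    def __init__(self, parent, title):
        wiz.WizardPageSimple.__init__(self, parent)
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(wx.StaticText(self, label=title), 0, wx.ALL, 5)
        self.SetSizer(sizer)

app = wx.App(False)
wizard = wiz.Wizard(None, title="Installer")
page_one = TitledPage(wizard, "Step 1")
page_two = TitledPage(wizard, "Step 2")
wiz.WizardPageSimple.Chain(page_one, page_two)   # wires up Next/Back
wizard.FitToPage(page_one)
wizard.RunWizard(page_one)                       # modal; returns when finished
wizard.Destroy()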
Of course the wizard is a bit limited, so if you needed something that gives you more flexibility, then you might want to look at rolling your own wizard. If you went with that approach, then yes, you would probably benefit from creating a base Page class that holds your assortment of widgets. Then as each page is shown, you can either Hide the previous page or Destroy it. Personally, I would just go with hiding it unless you had a lot of widgets per page.
This tutorial might help you get started down this path:
http://www.blog.pythonlibrary.org/2012/07/12/wxpython-how-to-create-a-generic-wizard/
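And if you do roll your own, the core of it is really just hiding one panel and showing another, along these lines (a bare-bones sketch, purely illustrative):

import wx

class Page(wx.Panel):
    def __init__(self, parent, text):
        wx.Panel.__init__(self, parent)
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(wx.StaticText(self, label=text), 0, wx.ALL, 5)
        self.SetSizer(sizer)

class WizardFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="My Wizard")
        self.sizer = wx.BoxSizer(wx.VERTICAL)
        self.pages = [Page(self, "Step 1"), Page(self, "Step 2"), Page(self, "Step 3")]
        for page in self.pages:
            self.sizer.Add(page, 1, wx.EXPAND)
            page.Hide()
        self.current = 0
        self.pages[self.current].Show()

        buttons = wx.BoxSizer(wx.HORIZONTAL)
        for label, step in (("Back", -1), ("Next", 1)):
            btn = wx.Button(self, label=label)
            btn.Bind(wx.EVT_BUTTON, lambda evt, s=step: self.go(s))
            buttons.Add(btn, 0, wx.ALL, 5)
        self.sizer.Add(buttons, 0, wx.ALIGN_RIGHT)
        self.SetSizer(self.sizer)

    def go(self, step):
        # hide the current page and show its neighbour instead of destroying it
        self.pages[self.current].Hide()
        self.current = max(0, min(self.current + step, len(self.pages) - 1))
        self.pages[self.current].Show()
        self.Layout()

app = wx.App(False)
WizardFrame().Show()
app.MainLoop()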
This is a followup to this thread.
I have implemented the method of keeping a reference to all my forms in an array, as mentioned by the selected answer in that thread, but unfortunately I am getting a memory leak on each request to the Django host.
The code in question is as follows:
This is the custom form I am extending; it has a function to keep references to neighboring forms, and whenever I instantiate a new form, it just keeps getting added to the _instances stack.
from django.forms import ModelForm

class StepForm(ModelForm):
    TAGS = []
    _instances = []

    def __new__(cls, *args, **kwargs):
        instance = object.__new__(cls)
        cls._instances.append(instance)
        return instance
Even though this is more of a Python issue than a Django one, I have decided that it's better to show you the full context in which I am encountering this problem.
As requested, I am posting what I am trying to accomplish with this:
I have a JS applet with steps, and for each step there is a form, but in order to load the contents of each step dynamically through JS I need to execute some calls on the next form in line, and on the previous one as well. Therefore the only solution I could come up with is to keep a reference to all forms on each request and just use the form functions I need.
Well, it's not only a Python issue - the execution context (here a Django app) is important too. As Ludwik Trammer rightly comments, you're in a long running process, and as such anything at the module or class level will live for the duration of the process. Also, if you use more than one process to serve the app you may (and will) get inconsistent results from one request to another, since two subsequent requests from the same user can (and will) end up being served by different processes.
To make a long story short: the way to safely keep per-user persistent state in a web application is to use sessions. Please explain what problem you're trying to solve, there's very probably a more appropriate (and possibly existing and tested) solution.
EDIT: OK, what you're looking for is a "wizard". There are a couple of available implementations for Django, but most of them don't handle going back - which, from experience, can get tricky when each step depends on the previous one (and that's one of the driving points for using a wizard). What one usually does is have a `Wizard` class (a plain old Python object) with a set of forms - see the sketch after the list below.
The wizard takes care of
step to step navigation
instantiating forms
maintaining state (which includes storing and retrieving each step's form data, revalidating, etc.).
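To give you an idea, the skeleton of such a class could look like this (names like SessionWizard or get_form are just illustrative, not an existing API), with all state living in the session rather than in a class attribute:

class SessionWizard(object):
    """Per-user wizard state kept in request.session rather than in a
    module-level or class-level list, so nothing outlives the request cycle."""

    def __init__(self, request, form_classes, session_key='wizard'):
        self.request = request
        self.form_classes = form_classes
        self.state = request.session.setdefault(
            session_key, {'step': 0, 'data': {}})

    def get_form(self, data=None):
        step = self.state['step']
        initial = self.state['data'].get(str(step))
        return self.form_classes[step](data=data, initial=initial)

    def save_step(self, form):
        # store this step's cleaned data (assuming it is serializable for
        # your session backend) and move on to the next form
        self.state['data'][str(self.state['step'])] = form.cleaned_data
        self.state['step'] += 1
        self.request.session.modified = True

    def previous_step(self):
        self.state['step'] = max(0, self.state['step'] - 1)
        self.request.session.modified = True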
FWIW I've had rather mixed success using Django's existing session-based wizard. We rolled our own for another project (with somewhat complex requirements), and while it works I wouldn't call it a success either. Having AJAX and file uploads thrown into the mix doesn't help either. Anyway, you can try to start with an existing implementation, see how it fits your needs, and go for a custom solution if it doesn't - generic solutions sometimes make things harder than they have to be.
My 2 cents...
The leak is not just a side effect of your code - it's part of its core function. It is not possible to remove the leak without changing what the code does.
It does exactly what it is programmed to do - every time the form is displayed a new instance is created and added to the _instances list. It is never removed from the list. As a consequence, after 100 requests you will have a list with 100 instances, after 1,000 requests there will be 1,000 instances in the list, and so on - until all memory is exhausted and the program crashes.
What did you want to accomplish by keeping all instances of your form? And what else did you expect to happen?
I'm trying to develop a system that will allow users to update local, offline databases on their laptops and, upon reconnection to the network, synchronize their dbs with the main, master db.
I looked at MySQL replication, but that documentation focuses on unidirectional syncing. So I think I'm going to build a custom app in python for doing this (bilateral syncing), and I have a couple of questions.
I've read a couple of posts regarding this issue, and one of the items which has been passively mentioned is serialization (which I would be implementing through the pickle and cPickle modules in python). Could someone please tell me whether this is necessary, and the advantages of serializing data in the context of syncing databases?
One of the uses in wikipedia's entry on serialization states it can be used as "a method for detecting changes in time-varying data." This sounds really important, because my application will be looking at timestamps to determine which records have precedence when updating the master database. So, I guess the thing I don't really get is how pickling data in python can be used to "detect changes in time-varying data", and whether or not this would supplement using timestamps in the database to determine precedence or replace this method entirely.
Anyways, high level explanations or code examples are both welcome. I'm just trying to figure this out.
Thanks
how pickling data in python can be used to "detect changes in time-varying data."
Bundling data in an opaque format tells you absolutely nothing about time-varying data, except that it might have possibly changed (but you'd need to check that manually by unwrapping it). What the article is actually saying is...
To quote the actual relevant section (link to article at this moment in time):
Since both serializing and deserializing can be driven from common code, (for example, the Serialize function in Microsoft Foundation Classes) it is possible for the common code to do both at the same time, and thus 1) detect differences between the objects being serialized and their prior copies, and 2) provide the input for the next such detection. It is not necessary to actually build the prior copy, since differences can be detected "on the fly". This is a way to understand the technique called differential execution[a link which does not exist]. It is useful in the programming of user interfaces whose contents are time-varying — graphical objects can be created, removed, altered, or made to handle input events without necessarily having to write separate code to do those things.
The term "differential execution" seems to be a neologism coined by this person, where he described it in another StackOverflow answer: How does differential execution work?. Reading over that answer, I think I understand what he's trying to say. He seems to be using "differential execution" as a MVC-style concept, in the context where you have lots of view widgets (think a webpage) and you want to allow incremental changes to update just those elements, without forcing a global redraw of the screen. I would not call this "serialization" in the classic sense of the word (not by any stretch, in my humble opinion), but rather "keeping track of the past" or something like that. Because this basically has nothing to do with serialization, the rest of this answer (my interpretation of what he is describing) is probably not worth your time unless you are interested in the topic.
In general, avoiding a global redraw is impossible. Global redraws must sometimes happen: for example in HTML, if you increase the size of an element, you need to reflow lower elements, triggering a repaint. In 3D, you need to redraw everything behind what you update. However if you follow this technique, you can reduce (though not minimize) the number of redraws. This technique he claims will avoid the use of most events, avoid OOP, and use only imperative procedures and macros. My interpretation goes as follows:
Your drawing functions must know, somehow, how to "erase" themselves and anything they do which may affect the display of unrelated functions.
Write a side-effect-free paintEverything() script that imperatively displays everything (e.g. using functions like paintButton() and paintLabel()), using nothing but IF macros/functions. The IF macro works just like an if-statement, except...
Whenever you encounter an IF branch, keep track of both which IF statement this was, and the branch you took. "Which IF statement this was" is sort of a vague concept. For example you might decide to implement a FOR loop by combining IFs with recursion, in which case I think you'd need to keep track of the IF statement as a tree (whose nodes are either function calls or IF statements). You ensure the structure of that tree corresponds to the precedence rule "child layout choices depend on this layout choice".
Every time a user input event happens, rerun your paintEverything() script. However because we have kept track of which part of the code depends on which other parts, we can automatically skip anything which did not depend on what was updated. For example if paintLabel() did not depend on the state of the button, we can avoid rerunning that part of the paintEverything() script.
The "serialization" (not really serialization, more like naturally-serialized data structure) comes from the execution history of the if-branches. Except, serialization here is not necessary at all; all you needed was to keep track of which part of the display code depends on which others. It just so happens that if you use this technique with serially-executed "smart-if"-statements, it makes sense to use a lazily-evaluated diff of execution history to determine what you need to update.
However this technique does have useful takeaways. I'd say the main takeaway is: it is also a reasonable thing to keep track of dependencies not just in an OOP-style (e.g. not just widget A depends on widget B), but dependencies of the basic combinators in whatever DSL you are programming in. Also dependencies can be inferred from the structure of your program (e.g. like HTML does).
I'm wondering how to go about implementing a macro recorder for a python gui (probably PyQt, but ideally agnostic). Something much like in Excel but instead of getting VB macros, it would create python code. Previously I made something for Tkinter where all callbacks pass through a single class that logged actions. Unfortunately my class doing the logging was a bit ugly and I'm looking for a nicer one. While this did make a nice separation of the gui from the rest of the code, it seems to be unusual in terms of the usual signals/slots wiring. Is there a better way?
The intention is that a user can work their way through a data analysis procedure in a graphical interface, seeing the effect of their decisions. Later the recorded procedure could be applied to other data with minor modification and without needing to start up the GUI.
You could apply the command design pattern: when your user executes an action, generate a command that represents the changes required. You then implement some sort of command pipeline that executes the commands themselves, most likely just calling the methods you already have. Once the commands are executed, you can serialize them or take note of them the way you want and load the series of commands when you need to re-execute the procedure.
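A minimal sketch of what that could look like (class and method names here are made up for illustration; the target's methods are whatever analysis functions you already have):

import json

class Command(object):
    """One recorded user action: a method name plus its parameters."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def execute(self, target):
        return getattr(target, self.name)(**self.params)

class CommandPipeline(object):
    def __init__(self, target):
        self.target = target
        self.history = []

    def run(self, command):
        result = command.execute(self.target)
        self.history.append({'name': command.name, 'params': command.params})
        return result

    def save(self, path):
        with open(path, 'w') as f:
            json.dump(self.history, f, indent=2)

    def replay(self, path, target=None):
        target = target if target is not None else self.target
        with open(path) as f:
            for entry in json.load(f):
                getattr(target, entry['name'])(**entry['params'])

In the GUI, the button handlers would only build Command objects and hand them to the pipeline; replaying the saved file against a different dataset then re-runs the same analysis without starting the GUI at all.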
Thinking at a high level, this is what I'd do:
Develop a decorator function, with which I'd decorate every event-handling function.
This decorator function would take note of the function called, and its parameters (and possibly return values), in a unified data structure - taking care, in this data structure, to mark Widget and Control instances as a special type of object. That is because in other runs these widgets won't be the same instances - indeed, you can't even serialize a toolkit widget instance, be it Qt or otherwise.
When the time comes to play a macro back, you fill in the gaps, replacing the widget-representing objects with the instances of the actually running widgets, and simply call the original functions with the remaining parameters.
In toolkits that have a specialized "event" parameter that is passed down to event-handling functions, you will have to take care of serializing and de-serializing this event as well.
I hope this can help. I could come up with some proof of concept code for that (although I am in a mood to use tkinter today - would have to read a lot to come up with a Qt4 example).
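In the meantime, here is a rough, toolkit-agnostic proof of concept of the idea (all names are made up; the widget-placeholder scheme is the important part):

import functools

RECORDING = []
WIDGETS = {}          # name -> live widget, filled in when the GUI is built

class WidgetRef(object):
    """Placeholder stored in the recording instead of a live widget."""
    def __init__(self, name):
        self.name = name

def _mark(value):
    # replace live widgets (anything registered in WIDGETS) with a placeholder
    for name, widget in WIDGETS.items():
        if value is widget:
            return WidgetRef(name)
    return value

def _fill(value):
    # the reverse: swap placeholders for whatever widgets exist in this run
    return WIDGETS[value.name] if isinstance(value, WidgetRef) else value

def record(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        RECORDING.append((func.__name__,
                          [_mark(a) for a in args],
                          dict((k, _mark(v)) for k, v in kwargs.items())))
        return func(*args, **kwargs)
    return wrapper

def replay(handlers):
    # handlers: name -> undecorated event-handling function
    for name, args, kwargs in RECORDING:
        handlers[name](*[_fill(a) for a in args],
                       **dict((k, _fill(v)) for k, v in kwargs.items()))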
An example of what you're looking for is in mayavi2. For your purposes, mayavi2's "script record" functionality will generate a Python script that can then be trivially modified for other cases. I hear that it works pretty well.