Trigger Cloud Function based on Firestore changes (Python)

Whenever there is a change (created, modified, or removed) in any document of a particular collection in Firestore, I want to capture the delta and publish it to Pub/Sub.
I am trying to do all of this in Python.
Code snippet:
def callback(col_snapshot, changes, read_time):
    for change in changes:
        if change.type.name == 'ADDED':
            print('ADDED')
        elif change.type.name == 'MODIFIED':
            print('MODIFIED')
        elif change.type.name == 'REMOVED':
            print('REMOVED')
    print('end of callback')

ref = db.collection('country').document(city_id)
ref.update(body_json)
ref.on_snapshot(callback)
Now, when I make changes to the Firestore documents:
Add a new document - I get ADDED followed by end of callback.
Modify an existing document - I mostly get only ADDED, but roughly once in ten runs I get both ADDED and MODIFIED, followed by end of callback.
Remove a document - I get only end of callback; rarely, both REMOVED and end of callback.
I am unable to understand this behaviour and don't know how to deal with these unreliable callback prints.

The limitation is that this is Node.js only, so JavaScript only. I think it still applies and is worth looking into, so I'm sharing it here: despite your question being Python-based, it's the best option.
I think Cloud Functions would be great for you here. You can set triggers that will run functions on each of the things you've noted.
Added can be handled by onCreate.
Modified can be handled by onUpdate.
Removed can be handled by onDelete.
You basically set up functions that look like this:
exports.updateUser = functions.firestore
    .document('users/{userId}')
    .onUpdate((change, context) => {
        // Get an object representing the document
        // e.g. {'name': 'Marie', 'age': 66}
        const newValue = change.after.data();

        // ...or the previous value before this update
        const previousValue = change.before.data();

        // access a particular field as you would any JS property
        const name = newValue.name;

        // perform desired operations ...
    });
In the example above you're given the before and after states of the document, so you'll be able to know not just that it was changed but what was changed.
The example code is taken straight from the Cloud Functions documentation here.
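Since the question itself is Python-based, it's worth noting that Cloud Functions also offers Python runtimes with Firestore triggers. The sketch below is a hedged illustration, not a verified implementation: `compute_delta` is a hypothetical helper, the `oldValue`/`value` payload shape should be checked against the Firestore trigger documentation, and the actual Pub/Sub publish (via google-cloud-pubsub) is elided to a comment.

```python
import json

def compute_delta(before, after):
    """Hypothetical helper: return the fields that differ between two
    document snapshots represented as plain dicts."""
    delta = {}
    for key in set(before) | set(after):
        if before.get(key) != after.get(key):
            delta[key] = {'before': before.get(key), 'after': after.get(key)}
    return delta

def on_document_write(event, context):
    """Sketch of a 1st-gen Python Cloud Function Firestore trigger.
    Assumption: the event dict carries 'oldValue' and 'value' entries,
    each with a 'fields' mapping (verify against the trigger docs)."""
    before = event.get('oldValue', {}).get('fields', {})
    after = event.get('value', {}).get('fields', {})
    payload = json.dumps(compute_delta(before, after)).encode('utf-8')
    # Publishing elided; with google-cloud-pubsub it would be roughly:
    # publisher.publish(topic_path, payload)
    return payload
```

Because each trigger fires exactly once per write with both states attached, this avoids the unreliable `on_snapshot` callback ordering seen in the question.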


Tableau API for Python, Get Parameter Value -- JS's .getParametersAsync() Equivalent for Python

I want to extract a workbook's currently set parameter values.
I've found reference to the desired request, as someone uses it in JS, but I can't find it anywhere in the REST API documentation, nor in the documentation for tableau-api-lib or tableauserverclient: Tableau API: Get parameter value
I can query workbooks just fine using either of the above libraries, but is there a method I'm missing somewhere to get the parameter values?
Ideally I'd like to be able to modify them before a query, but getting what they're currently set to would be a nice start.
JavaScript equivalent:
paramObjs = currentViz.getWorkbook().getParametersAsync();
paramObjs.then(function (paramObjs) {
    for (var i = 0; i < paramObjs.length; i++) {
        try {
            var name = paramObjs[i].getName();
            var value = paramObjs[i].getCurrentValue();
            params[name] = value.value;
        } catch (e) { }
    }
});
From what it looks like, this task cannot be done using Python:
The Workbook class here does not have any attributes/methods to get parameters.
Tableau's REST API reference has no method/endpoint for that.
Other APIs, like the Metadata and Hyper APIs, have no connection to the task.
A list of all Tableau packages/libraries is given here. Going through all the Python libraries, I found no method to fetch the parameters.
In JavaScript, they are actually embedding visualizations, which allows them to query the visualization itself (currentViz.getWorkbook()). The same isn't applicable to Python, which explains the missing support/APIs.

How to fill a slot in the Alexa backend using the Python ASK SDK

I need to fill the slot value of an intent depending on some conditions.
I referred to the following documentation:
https://developer.amazon.com/en-US/docs/alexa/custom-skills/delegate-dialog-to-alexa.html#node_delegate_default_values_example
In this document, they do something like this.
// fromCity.value is empty if the user has not filled the slot. In this example,
// getUserDefaultCity() retrieves the user's default city from persistent storage.
if (!fromCity.value) {
    currentIntent.slots.fromCity.value = getUserDefaultCity();
}
Similarly, I want to know how to do this using the Python ASK SDK, and how to return something similar to this:
// Return the Dialog.Delegate directive
return handlerInput.responseBuilder
    .addDelegateDirective(currentIntent)
    .getResponse();
Thanks in advance!
I finally found the solution for this.
from ask_sdk_model.dialog import delegate_directive

update_intent = {
    'name': intent_name,
    'confirmation_status': 'NONE',
    'slots': handler_input.request_envelope.request.intent.slots
}

return handler_input.response_builder.add_directive(
    delegate_directive.DelegateDirective(updated_intent=update_intent)
).response
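The JS default-value pattern from the question can also be expressed in Python. The sketch below uses plain dicts as stand-ins for `ask_sdk_model.Slot` objects so it runs without the SDK installed; in real handler code you would read and write the slot objects' `.value` attributes instead. `fill_default_slot` is a hypothetical helper name, not an SDK function.

```python
def fill_default_slot(slots, slot_name, default_value):
    """Mirror of the JS 'use a default if the user left the slot empty'
    pattern. `slots` is a dict of slot-name -> slot-dict; each slot dict
    approximates the shape of an ask_sdk_model.Slot ({'name', 'value'})."""
    slot = slots.get(slot_name) or {'name': slot_name, 'value': None}
    if not slot.get('value'):
        # slot was never filled (or filled with an empty value)
        slot['value'] = default_value
    slots[slot_name] = slot
    return slots
```

Applied before building the `DelegateDirective`, this pre-fills the slot so Alexa does not re-prompt for it.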

Is it more efficient to use function args after branching (Python)?

I have a function that takes several arguments, one of which is a contact number. The data provided to the function is used to generate documents; if one option is selected, the document is immediately returned inline, while the other option takes the contact number and generates an email. In the original version of this function, the contact number was parsed at the start of the function, but I moved it into the else block, as that is where the email that uses the contact number is actually generated, and I saw no reason to create a new variable if it was not used half of the time. An example is below, built in Python using the Django framework:
def function(request, object, number=None):
    obj = ObjectItem.objects.get(id=object)
    # Originally the contact number was processed here
    if request.method == 'POST':
        if 'inline' in request.POST:
            data = {
                'object': obj,
            }
            return generate_document(data, inline=True)
        else:
            if number:
                contact = '{}'.format(number)
            else:
                contact = obj.contact
            data = {
                'object': obj,
            }
            document = generate_document(data, inline=False)
            return message(document, contact)
    else:
        return redirect()
While looking at my code, I realize that I could move the data dict creation outside of the inline vs. non-inline branching, but I do not know whether moving the processing of the number argument into the else block actually saves any time, or whether it is the more standard way of doing things. I know that, as Python is interpreted, it does not perform the kind of optimizations a compiler would when rearranging that kind of declaration, so I am looking for the most efficient way of doing this.
From a performance perspective, it makes no difference whether you create data above the if or in the if. Python will only hit the line once and the dict will only be created once. But you should move it above the if for design reasons.
First, don't repeat yourself - if you can reasonably implement a bit of code in one place, don't sprinkle it around your code. Suppose you decide a defaultdict is better later, you only have to change it in one place.
Second, placement implies intent. If you put it above your if, you've made a statement that you plan to use that data structure everywhere. In your current code, readers will ask the same question you do: why wasn't that above the if? It's a small point, but reading the code shouldn't raise extra questions.
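To make the suggestion concrete, here is a framework-free sketch of the refactor; plain values stand in for Django's request, document generation, and messaging, so none of the names below are the actual view's API. `data` is built once above the branch, and the contact number is only parsed on the path that needs it.

```python
def handle_request(method, post, obj, number=None):
    """Simplified stand-in for the Django view. `post` is a dict
    standing in for request.POST, `obj` for the ObjectItem."""
    if method != 'POST':
        return ('redirect',)
    # Built once, above the branch: both POST paths need it (DRY).
    data = {'object': obj}
    if 'inline' in post:
        return ('inline', data)
    # The number is only touched on the email path, where it is used.
    contact = '{}'.format(number) if number else obj['contact']
    return ('email', data, contact)
```

Performance is identical either way; the hoisted `data` simply states the intent that both branches share the same payload.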

How do I specify the compare function of the scoreboard in Cocotb?

I want to extend the Endian Swapper example of cocotb so that it also checks the contents of the packets output by the device under test (DUT). In the provided example code, the model function, which generates the expected output, appends the unmodified input transaction to the list of expected outputs. This list is given as a parameter to the scoreboard.
To understand how the scoreboarding works, and why the model function did not append byte-swapped transactions, I introduced a design error in the DUT. Within the following code block of endian_swapper.vhdl:
if (byteswapping = '0') then
    stream_out_data <= stream_in_data;
else
    stream_out_data <= byteswap(stream_in_data);
end if;
I just inverted the if condition in the first line to (byteswapping /= '0').
After re-running the testbench, I would have expected the test to fail, but it still passes:
# 62345.03ns INFO cocotb.regression regression.py:209 in handle_result Test Passed: wavedrom_test
# 62345.03ns INFO cocotb.regression regression.py:170 in tear_down Passed 33 tests (0 skipped)
# 62345.03ns INFO cocotb.regression regression.py:176 in tear_down Shutting down...
It seems that the compare function is missing in the creation of the scoreboard:
self.scoreboard = Scoreboard(dut)
self.scoreboard.add_interface(self.stream_out, self.expected_output)
It should have a third parameter in the call of add_interface, but this parameter is undocumented.
So how do I specify this compare function so that the packet content is also checked?
I am using QuestaSim for simulation and executed the testbench with make SIM=questa. I also cleaned up the build directory between the runs.
If I apply the following diff when using Icarus the tests fail as expected:
diff --git a/examples/endian_swapper/hdl/endian_swapper.sv b/examples/endian_swapper/hdl/endian_swapper.sv
index 810d3b7..a85db0d 100644
--- a/examples/endian_swapper/hdl/endian_swapper.sv
+++ b/examples/endian_swapper/hdl/endian_swapper.sv
@@ -119,7 +119,7 @@ always @(posedge clk or negedge reset_n) begin
     stream_out_startofpacket <= stream_in_startofpacket;
     stream_out_endofpacket <= stream_in_endofpacket;
-    if (!byteswapping)
+    if (byteswapping)
         stream_out_data <= stream_in_data;
     else
         stream_out_data <= byteswap(stream_in_data);
I don't have access to Questa, but I'll see what happens on a VHDL simulator. My instinct would be to double-check that you ran make clean after making the change, and to check that Questa isn't caching the built RTL libraries somehow.
You are correct that there are some undocumented keyword arguments to the add_interface method of the scoreboard:
compare_fn can be any callable function
reorder_depth is an integer to permit re-ordering of transactions
If you supply a compare_fn, it will be called with the transaction when it's received by a monitor; however, this is quite a primitive mechanism. It's not scalable and is only there for historical reasons (hence undocumented).
A better way is to subclass the Scoreboard class and define a custom compare method according to the following prototype:
def compare(self, got, exp, log, strict_type=True):
    """
    Common function for comparing two transactions.
    Can be re-implemented by a subclass.
    """
where got and exp are the received and expected transactions and log is a reference to the logger instance of the monitor (to provide more meaningful messages).
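For illustration, the subclassing pattern could look like the sketch below. A minimal stand-in base class is used so the snippet runs without cocotb; in a real testbench you would subclass cocotb's Scoreboard class itself, and compare would typically report mismatches through the supplied log rather than a bare counter. The class and attribute names here are assumptions for the sketch, not cocotb API.

```python
class StubScoreboard:
    """Minimal stand-in for cocotb's Scoreboard, only for this sketch."""
    def __init__(self):
        self.errors = 0

class PacketScoreboard(StubScoreboard):
    """Scoreboard variant whose compare checks full packet contents,
    not just that a transaction arrived."""
    def compare(self, got, exp, log=None, strict_type=True):
        if strict_type and type(got) is not type(exp):
            self.errors += 1
            return False
        if got != exp:
            # real code would log a byte-level diff via `log` here
            self.errors += 1
            return False
        return True
```

With a custom compare like this, the inverted-byteswapping bug in the DUT would make the expected and received packets differ, and the test would fail as intended.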
The top level of the Endian Swapper example is provided as SystemVerilog code as well as VHDL code. The Verilog code is used by default unless specified otherwise by compile options.
If I run:
make SIM=questa TOPLEVEL_LANG=vhdl
as given in the Quick Start Guide, everything works as expected. There is no need to specify a compare function in this case.

How to programmatically create a detailed event like z3c.form does?

I have a simple event handler that looks at what has actually been changed (it's registered for IObjectModifiedEvent events); the code looks like:
def on_change_do_something(obj, event):
    modified = False
    # check if the publication has changed
    for change in event.descriptions:
        if change.interface == IPublication:
            modified = True
            break
    if modified:
        # do something
        ...
So my question is: how can I programmatically generate those descriptions? I'm using plone.app.dexterity everywhere, so z3c.form is doing that automagically when using a form, but I want to test it with a unittest.
event.descriptions is nominally a sequence of IModificationDescription objects, essentially a list of IAttributes objects: each Attributes object has an interface (e.g. the schema) and the attributes (e.g. a list of field names) modified.
Simplest solution is to create a zope.lifecycleevent.Attributes object for each field changed and pass them as arguments to the event constructor. Example:
# imports elided...
changelog = [
    Attributes(IFoo, 'some_fieldname_here'),
    Attributes(IMyBehaviorHere, 'some_behavior_provided_fieldname_here'),
]
notify(ObjectModifiedEvent(context, *changelog))
I may have misunderstood something, but you could simply fire the event in your code with the same parameters z3c.form uses (similar to the comment from @keul).
After a short search in Plone 4.3.x, I found this in z3c.form.form:
def applyChanges(self, data):
    content = self.getContent()
    changes = applyChanges(self, content, data)
    # ``changes`` is a dictionary; if empty, there were no changes
    if changes:
        # Construct change-descriptions for the object-modified event
        descriptions = []
        for interface, names in changes.items():
            descriptions.append(
                zope.lifecycleevent.Attributes(interface, *names))
        # Send out a detailed object-modified event
        zope.event.notify(
            zope.lifecycleevent.ObjectModifiedEvent(content, *descriptions))
    return changes
You need two test cases: one which does nothing and one which goes through your code.
applyChanges is in the same module (z3c.form.form); it iterates over the form fields and computes a dict with all changes.
You should set a breakpoint there to inspect how the dict is built.
Afterwards you can do the same in your test case.
This way you can write readable test cases.
def test_do_something_in_event(self):
    content = self.get_my_content()
    descriptions = self.get_event_descriptions()
    zope.event.notify(
        zope.lifecycleevent.ObjectModifiedEvent(content, *descriptions))
    self.assertSomething(...)
IMHO, mocking the whole logic away may be a bad idea for the future: if the code changes and ends up working completely differently, your test will still pass.
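The description-checking logic can also be exercised without a full Zope stack by using lightweight stand-ins for the zope.lifecycleevent classes. This is only a sketch: the class shapes are assumed from the answer above, and IPublication is reduced to a marker string rather than a real interface.

```python
class Attributes:
    """Stand-in for zope.lifecycleevent.Attributes (assumed shape)."""
    def __init__(self, interface, *attributes):
        self.interface = interface
        self.attributes = attributes

class ObjectModifiedEvent:
    """Stand-in for zope.lifecycleevent.ObjectModifiedEvent."""
    def __init__(self, obj, *descriptions):
        self.object = obj
        self.descriptions = descriptions

# Marker standing in for the IPublication interface class.
IPublication = 'IPublication'

def publication_changed(event):
    # same check the on_change_do_something handler performs
    return any(change.interface == IPublication
               for change in event.descriptions)
```

A unit test can then construct `ObjectModifiedEvent(context, Attributes(IPublication, 'effective'))` and assert the handler takes the modified path, mirroring how z3c.form builds the real event.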
