Is it possible to instantiate a Flyte Task at runtime, so that I can create a Workflow with a variable number of Tasks, each running a runtime-determined Python callable? In the documentation, I only see references to compile-time Workflows that are declaratively composed of Python functions annotated with the @task decorator.
If you can provide any existing examples in open source code or a new, small inline example, please do! Thanks!
Have you looked at dynamic workflows? https://docs.flyte.org/projects/cookbook/en/stable/auto/core/control_flow/dynamics.html
Dynamic workflows in Flyte are like JIT compilation in a language like Java: the new workflow graph is created, compiled, verified, and then executed. But the graph is created in response to the inputs, and you control its shape and structure at runtime.
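A minimal sketch of a dynamic workflow, assuming a recent flytekit version (the task body and names are placeholders; @dynamic is the relevant part):

from typing import List

from flytekit import dynamic, task, workflow

@task
def square(n: int) -> int:
    return n * n

@dynamic
def make_squares(count: int) -> List[int]:
    # The number of task nodes is decided here, at runtime, from the input.
    return [square(n=i) for i in range(count)]

@workflow
def wf(count: int) -> List[int]:
    return make_squares(count=count)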
The functionality I was looking for is provided by the FlyteRemote class. With this class, one can register instantiated entities, i.e. tasks, workflows, and launch plans.
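A rough sketch of using it, assuming a recent flytekit version (project and domain names are placeholders; for unregistered entities, execute() registers them before running):

from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

remote = FlyteRemote(
    config=Config.auto(),           # picks up your standard Flyte config
    default_project="flytesnacks",  # placeholder project
    default_domain="development",   # placeholder domain
)

# my_workflow can be any workflow object constructed at runtime.
execution = remote.execute(my_workflow, inputs={"count": 5}, wait=True)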
Related
Question - What approach or design pattern would make it easy in Django to use stubs for external integrations when running tests? By 'external integrations' I mean a couple of external REST APIs and a NAS file system. The external integrations are already separate modules/classes.
What I do now -
Currently, I disable external dependencies in tests mainly by sprinkling mock.patch() statements across my test code.
But this is getting impractical (it needs to be applied per test, and is easy to forget, especially in higher-level tests), and it ties the tests too closely to the internals of certain modules.
Some details of what I am looking for
I like the concept of 'nullable infrastructure' described at
https://www.jamesshore.com/v2/blog/2018/testing-without-mocks#nullable-infrastructure.
I am especially looking for an approach that integrates well with Django, i.e. considering the settings.py file approach, and running tests via python manage.py test.
I would like to be able to easily:
state that all tests should use the nullable counterpart of an infrastructure class or function
override that behaviour per test, or test class, when needed (e.g. when testing the actual external infrastructure).
I tried the approach outlined in https://medium.com/analytics-vidhya/mocking-external-apis-in-django-4a2b1c9e3025, which basically says to create an interface class, a real implementation, and a stub implementation. The switching is done using a Django settings parameter and a class decorator on the interface class (which returns the chosen class, rather than the interface). But it isn't working out very well: in my setup the class decorator does not work with @override_settings (the decorator applies the settings when Django starts, not when the test runs), and it requires a lot of extra code (which also feels un-pythonic).
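One pattern that avoids that import-time problem, sketched here with hypothetical names, is to resolve the implementation lazily from settings at call time, so @override_settings takes effect:

# settings.py (a test-only settings module can default this to the nullable class)
EMAIL_GATEWAY = "myapp.infrastructure.RealEmailGateway"

# myapp/infrastructure.py
from django.conf import settings
from django.utils.module_loading import import_string

def get_email_gateway():
    # Resolved at call time, not at import time, so @override_settings works.
    return import_string(settings.EMAIL_GATEWAY)()

# test_checkout.py
from django.test import TestCase, override_settings

@override_settings(EMAIL_GATEWAY="myapp.infrastructure.NullEmailGateway")
class CheckoutTests(TestCase):
    ...  # every test in this class gets the nullable implementation

Setting the default in a test-only settings module covers "all tests use the nullable counterpart", while @override_settings handles the per-test or per-class overrides.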
I am writing a program which interacts with many external services, such as Gmail and Discord, through their respective SDKs. The problem I am running into is that in development the program makes a lot of network calls and runs expensive computations that I would rather avoid by using stub objects. The SDKs I am using expose their functionality through standard Python classes with type hints. At the moment I am creating stubs for them manually, but that will not be feasible in the long run.
For example, here is a simplified illustration of what I am trying to achieve.
from dataclasses import dataclass

@dataclass
class EmailReceipt:
    receiver_email_address: str
    email_text: str
    ...

class GmailService:
    ...
    def send_email(self, receiver_email: str) -> EmailReceipt:
        """A network call is made here to send the email."""
    # More methods follow
    ...

class GmailServiceStub:
    def send_email(self, receiver_email: str) -> EmailReceipt:
        """Instantiates a random EmailReceipt object and returns it."""
    # More stub methods follow
In development I would like to avoid making requests to the mail server, so I am creating a mock class. The codebase uses dependency injection throughout, so it is trivial to swap in different versions of the GmailService class. I am using mocked versions of external services for rapid development, but I think they could also be used for testing.
All I am doing here is implementing the contract: the send_email method returns an instance of EmailReceipt, disregarding any domain logic, so that it can be used downstream by other classes. At the moment it is just 2 services with 10 methods in total, but it is growing, and I would rather have a tool or a library generate the stubs for me.
So I am wondering if there is a tool or a library which could do it or something close to it, ideally with this interface.
mocked_service = mocker.mock_class(service)
# All methods of mocked_service return appropriate objects/types which can be used downstream.
If it is not possible in Python, are there other programming languages where this would be possible?
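For what it's worth, the standard library's unittest.mock.create_autospec gets part of the way there: it generates a stub whose methods match the real signatures, though return values still have to be configured wherever downstream code needs real objects:

from unittest.mock import create_autospec

# Builds a stub with the same method signatures as GmailService;
# calling send_email with the wrong arguments raises a TypeError.
gmail_stub = create_autospec(GmailService, instance=True)

# Return values default to Mock objects; set real ones where needed.
gmail_stub.send_email.return_value = EmailReceipt(
    receiver_email_address="test@example.com",
    email_text="hello",
)

receipt = gmail_stub.send_email("test@example.com")
assert isinstance(receipt, EmailReceipt)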
I have spent quite some time building complex apps in Dash. I have interfaces for "custom components", wrappers for id-property pairs, a way to keep track of my component ids, and means to re-use the components I created.
I'm using several of the transforms provided by dash-extensions, without which some of my solutions would not work.
I mirrored custom types in my program logic code by providing converter classes, which convert custom type objects (e.g. Measurement) to and from dictionary representations, which I can then pass through dash callbacks.
Now, in a couple of situations we are actually using Dash as a "desktop app": it is not run on a server but on localhost, and there is only ever one user. Thus, the "no globals" restriction is void, isn't it? I could safely rely on global constants in my Dash app if I only run it on localhost, correct?
Moreover, I create my custom components inside a class, which also has a method registerCallbacks() that creates all the callbacks necessary for the component. These callbacks could then also rely on class members, correct? That way I could keep the converter functions on the instance and would not need to pass my "objects" via Inputs/Outputs.
Are there any problems with this idea?
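For concreteness, a minimal sketch of the pattern described above (all names are made up); callbacks registered from a method close over self, which is only safe in the single-user, localhost scenario:

from dash import Dash, Input, Output, html

class MeasurementPanel:
    def __init__(self, uid, converter):
        self.uid = uid
        self.converter = converter  # kept on the instance instead of passed via Inputs/Outputs
        self.layout = html.Div([
            html.Button("Update", id=f"{uid}-btn"),
            html.Div(id=f"{uid}-out"),
        ])

    def registerCallbacks(self, app):
        @app.callback(Output(f"{self.uid}-out", "children"),
                      Input(f"{self.uid}-btn", "n_clicks"))
        def update(n_clicks):
            # The closure captures self, so instance state is available here.
            return f"{self.uid}: clicked {n_clicks or 0} times"

app = Dash(__name__)
panel = MeasurementPanel("m1", converter=None)
app.layout = panel.layout
panel.registerCallbacks(app)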
This question is very generic, but I don't think it is opinion-based. It is about software design, and the example prototype is in Python:
I am writing a program whose goal is to simulate some behaviour (the details don't matter). The data on which the simulation works is fixed, but I want to change the simulated behaviour at every startup. The simulation behaviour can't be changed at runtime.
Example:
Simulation behaviour is defined like:
usedMethod = static
The program then looks something like this:
while True:
    result = static(obj)  # static is the method specified in the behaviour
    # do something with result
The question is: what is the best way to deal with these exchangeable functions? Another run of the simulation could look like this,
while True:
    result = dynamic(obj)
if dynamic is specified as usedMethod. The first thing that came to my mind was an if-else block that checks which method is in use and then executes it. That would not be a very good solution, because every time I add new behaviour I would have to change the if-else block, and the if-else block itself might cost performance, which also matters: the simulations should be fast.
So a solution I could think of was using a function pointer (the inputs and outputs of all usedMethods are well defined, so that should not be a problem). I would then initialize the function pointer at startup, where the used method is defined.
The problem I currently have is that the used method is not a function per se, but a method of a class, and it depends heavily on the internal members of that class, so the code looks more like this:
balance = BalancerClass()
while True:
    result = balance.static(obj)
    ...
balance.doSomething(data)
So my question is, what is a good solution to deal with this problem?
I thought about inheriting from the BalancerClass (which would then be an abstract class; I don't know if this concept exists in Python) and adding a derived class for every used method. Then, at run-time, I create the correct derived object as specified in the simulation behaviour.
In my eyes this is a good solution, because it encapsulates the methods away from the base class itself, and every used method is managed by its own class, which can add new internal behaviour if needed.
Furthermore, the doSomething method shouldn't change, so it is implemented in the base class, but it depends on the internal members changed by the derived class.
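Python does support abstract classes via the abc module; here is a minimal sketch of that design, with the behaviours reduced to placeholders:

from abc import ABC, abstractmethod

class BalancerBase(ABC):
    # Shared logic stays in the base class and uses the overridable hook.
    def doSomething(self, data):
        return self.compute(data)  # hypothetical shared behaviour

    @abstractmethod
    def compute(self, obj):
        ...

class StaticBalancer(BalancerBase):
    def compute(self, obj):
        return 1  # placeholder for the "static" behaviour

class DynamicBalancer(BalancerBase):
    def compute(self, obj):
        return 2  # placeholder for the "dynamic" behaviour

# Chosen once at startup from the configured behaviour name:
BEHAVIOURS = {"static": StaticBalancer, "dynamic": DynamicBalancer}
balance = BEHAVIOURS["static"]()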
I don't know in general whether this software design is a good fit for my problem, or whether I am missing a very basic and easy concept.
If you have another or better solution, please share it, ideally with its advantages and disadvantages. Could you also point out advantages and disadvantages of my solution that I didn't think of?
Hey, I could be wrong, but what you are looking for boils down to either dependency injection or the strategy design pattern, both of which solve the problem of executing dynamic code at runtime via a common interface, without worrying about the actual implementations. There is also the simpler way you described yourself: creating an abstract class (interface) and having all the concrete classes implement it.
I am giving brief examples of each here for your reference:
Dependency Injection (from Wikipedia):
In software engineering, dependency injection is a technique whereby one object supplies the dependencies of another object. A "dependency" is an object that can be used, for example as a service. Instead of a client specifying which service it will use, something tells the client what service to use. The "injection" refers to the passing of a dependency (a service) into the object (a client) that would use it. The service is made part of the client's state.
Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.
Python does not have such a concept built into the language itself, but there are packages out there that implement this pattern.
Here is a nice article about this in Python (all credits to the original author):
Dependency Injection in Python
Strategy Pattern: This is an alternative to inheritance and an example of composition; instead of inheriting from a base class, we pass the required class's object to the constructor of the class we want to have the functionality in. For example:
Suppose you want a common add() operation, but it can be implemented in different ways (adding two numbers, or concatenating two strings):
class XYZ:
    def __init__(self, adder):
        self.adder = adder
The only condition is that all adders passed to the XYZ class implement a common interface.
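A runnable version of that sketch, with two hypothetical adder strategies:

class NumberAdder:
    def add(self, a, b):
        return a + b

class StringAdder:
    def add(self, a, b):
        return f"{a}{b}"

class XYZ:
    def __init__(self, adder):
        self.adder = adder  # the injected strategy

    def run(self, a, b):
        return self.adder.add(a, b)

print(XYZ(NumberAdder()).run(1, 2))      # 3
print(XYZ(StringAdder()).run("1", "2"))  # "12"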
Here is a more detailed example:
Strategy Pattern in Python
Interfaces:
Interfaces are the simplest: they define a set of common attributes and methods (with or without a default implementation). Any class can then implement an interface with its own functionality, or with some shared common functionality. In Python, interfaces are implemented via the abc module.
I'm wondering how to go about implementing a macro recorder for a Python GUI (probably PyQt, but ideally toolkit-agnostic), something much like the one in Excel, but producing Python code instead of VB macros. Previously I made something for Tkinter where all callbacks passed through a single class that logged actions. Unfortunately the class doing the logging was a bit ugly, and I'm looking for a nicer design. While this approach did give a nice separation of the GUI from the rest of the code, it seems unusual compared to the usual signals/slots wiring. Is there a better way?
The intention is that a user can work their way through a data analysis procedure in a graphical interface, seeing the effect of their decisions. Later, the recorded procedure could be applied to other data with minor modification and without needing to start up the GUI.
You could apply the command design pattern: when your user executes an action, generate a command object that represents the required changes. You then implement some sort of command pipeline that executes the commands themselves, most likely by just calling the methods you already have. Once the commands are executed, you can serialize them or record them however you want, and load the series of commands when you need to re-execute the procedure.
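A minimal sketch of that idea (all names are made up): because commands are plain data, a recorded session serializes and replays easily:

import json
from dataclasses import dataclass, field

@dataclass
class Command:
    name: str                                # name of the analysis method to call
    kwargs: dict = field(default_factory=dict)

class Recorder:
    def __init__(self, target):
        self.target = target                 # the object whose methods do the real work
        self.history = []

    def execute(self, name, **kwargs):
        self.history.append(Command(name, kwargs))
        return getattr(self.target, name)(**kwargs)

    def save(self, path):
        with open(path, "w") as f:
            json.dump([vars(c) for c in self.history], f)

def replay(path, target):
    # Re-run a recorded session against a (possibly different) target object.
    with open(path) as f:
        for entry in json.load(f):
            getattr(target, entry["name"])(**entry["kwargs"])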
Thinking at a high level, this is what I'd do:
Develop a decorator function with which I'd decorate every event-handling function.
This decorator function would take note of the function called and its parameters (and possibly its return values) in a unified data structure, taking care to mark Widget and Control instances as a special type of object in that structure. That is because in later runs those widgets won't be the same instances; you can't even serialize toolkit widget instances, be they Qt or otherwise.
When the time comes to play a macro back, you fill in the gaps, replacing the widget-representing objects with the actually running instances, and simply call the original functions with the remaining parameters. A sketch of such a decorator follows this list.
In toolkits that pass a specialized "event" parameter down to event-handling functions, you will have to take care of serializing and deserializing this event as well.
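A rough Tkinter-flavoured sketch of such a decorator (the widget test, the placeholder marker, and the log format are all assumptions):

import functools
import tkinter

MACRO_LOG = []  # the unified data structure holding recorded calls

def record(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Widgets can't be serialized and differ between runs, so mark
        # them with a placeholder to be filled in at replay time.
        logged = ["<WIDGET>" if isinstance(a, tkinter.Widget) else a for a in args]
        MACRO_LOG.append((func.__name__, logged, kwargs))
        return func(*args, **kwargs)
    return wrapper

@record
def on_button_click(widget, factor=2):
    ...  # the real event-handling body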
I hope this helps. I could come up with some proof-of-concept code for that (although I am in a Tkinter mood today; I would have to read a lot to come up with a Qt4 example).
An example of what you're looking for is in mayavi2. For your purposes, mayavi2's "script record" functionality will generate a Python script that can then be trivially modified for other cases. I hear that it works pretty well.