How to Generate Python Bindings for Thrift in Bazel

I'm attempting to generate Python bindings for a Thrift service definition using Bazel. As far as I've been able to tell, there is no existing .bzl for doing this, so I'm somewhat on my own here. I've written .bzl rules in the past, but the situation I'm running into in this case is different.
The general issue is that I don't know the names of the output files from the thrift command before the build starts, which means that I can't generate a py_library rule with its srcs attribute set correctly, since I don't have the names of the files. I've tried to follow examples where the output files are known ahead of time by way of generating a .zip file, but the py_library rule only allows .py files as srcs, so this doesn't work.
The only thing I can think of would be to use a repository_rule to generate the code and BUILD files but what I'm trying to accomplish doesn't seem like much of a stretch and should be supported.

Someone has attempted this before.
Discussion here:
https://groups.google.com/forum/#!topic/bazel-dev/g3DVmhVhNZs
Code Here:
https://github.com/wt/bazel_thrift
I would start there.
Edit:
I started there. I did not get as far as I hoped. Bazel is being extended to support having multiple outputs generated by one input, but it does not allow that very easily just yet, per:
groups.google.com/forum/#!topic/bazel-discuss/3WQhHm194yU
Regardless, I did attempt something for C++ thrift bindings, which have the same issue. The Java example got around this by using the source jar as a build source, which won't work for us. To make it work, I passed in the list of source files I cared about that would be created by the thrift generator. I then reported these files as the outputs that would be generated in the impl. That seems to work. It is a bit nasty in that you have to know what files you are looking for before you build, but it does work. It would also be possible to have a small program read the thrift file and determine the output files it would make. That would be nicer, but I don't have the time. Plus, the current approach is nice in that it explicitly defines which files you expect thrift to generate, which makes the BUILD file a little easier to understand for a newbie like me.
First pass at some code, maybe I will clean it up and submit it as a patch (maybe not):
###########
# CPP gen
###########
# Create generated cpp source files from thrift IDL files.
#
def _gen_thrift_cc_src_impl(ctx):
    out = ctx.outputs.outs
    if not out:
        # Empty set: nothing to do, no inputs to build.
        return DefaultInfo(files=depset(out))
    # Used dir(out[0]) to see what we had available in the object.
    # The dirname attribute tells us the directory we should be putting
    # stuff in, which works nicely. ctx.genfiles_dir is not the output
    # directory when called as an external repository.
    target_genfiles_root = out[0].dirname
    thrift_includes_root = "/".join(
        [target_genfiles_root, "thrift_includes"])
    gen_cpp_dir = "/".join([target_genfiles_root, "."])
    commands = []
    commands.append(_mkdir_command_string(target_genfiles_root))
    commands.append(_mkdir_command_string(thrift_includes_root))
    thrift_lib_archive_files = ctx.attr.thrift_library._transitive_archive_files
    for f in thrift_lib_archive_files:
        commands.append(
            _tar_extract_command_string(f.path, thrift_includes_root))
    commands.append(_mkdir_command_string(gen_cpp_dir))
    thrift_lib_srcs = ctx.attr.thrift_library.srcs
    for src in thrift_lib_srcs:
        commands.append(_thrift_cc_compile_command_string(
            thrift_includes_root, gen_cpp_dir, src))
    inputs = (
        list(thrift_lib_archive_files) + thrift_lib_srcs)
    ctx.action(
        inputs = inputs,
        outputs = out,
        progress_message = "Generating CPP sources from thrift archive %s" % target_genfiles_root,
        command = " && ".join(commands),
    )
    return DefaultInfo(files=depset(out))

thrift_cc_gen_src = rule(
    _gen_thrift_cc_src_impl,
    attrs = {
        "thrift_library": attr.label(
            mandatory=True, providers=['srcs', '_transitive_archive_files']),
        "outs": attr.output_list(mandatory=True, non_empty=True),
    },
    output_to_genfiles = True,
)
# Wraps cc_library to generate a library from one or more .thrift files
# provided as a thrift_library bundle.
#
# Generates all src and hdr files needed, but you must specify the expected
# files. This is a limitation in Bazel:
# https://groups.google.com/forum/#!topic/bazel-discuss/3WQhHm194yU
#
# Instead of srcs and hdrs, requires cpp_srcs and cpp_hdrs (both mandatory).
#
# Takes:
#   name: The library name, like cc_library.
#
#   thrift_library: The library of source .thrift files from which our
#     code will be built.
#
#   cpp_srcs: The expected sources that will be generated and built. Passed to
#     cc_library as srcs.
#
#   cpp_hdrs: The expected header files that will be generated. Passed to
#     cc_library as hdrs.
#
# The rest of the options are documented in native.cc_library.
#
def thrift_cc_library(name, thrift_library,
                      cpp_srcs=[], cpp_hdrs=[],
                      build_skeletons=False,
                      deps=[], alwayslink=0, copts=[],
                      defines=[], include_prefix=None,
                      includes=[], linkopts=[],
                      linkstatic=0, nocopts=None,
                      strip_include_prefix=None,
                      textual_hdrs=[],
                      visibility=None):
    # From our thrift_library tarball source bundle,
    # create a generated cpp source directory.
    outs = []
    for src in cpp_srcs:
        outs.append("//:" + src)
    for hdr in cpp_hdrs:
        outs.append("//:" + hdr)
    thrift_cc_gen_src(
        name = name + 'cc_gen_src',
        thrift_library = thrift_library,
        outs = outs,
    )
    # Then make the library for the given name.
    native.cc_library(
        name = name,
        deps = deps,
        srcs = cpp_srcs,
        hdrs = cpp_hdrs,
        alwayslink = alwayslink,
        copts = copts,
        defines = defines,
        include_prefix = include_prefix,
        includes = includes,
        linkopts = linkopts,
        linkstatic = linkstatic,
        nocopts = nocopts,
        strip_include_prefix = strip_include_prefix,
        textual_hdrs = textual_hdrs,
        visibility = visibility,
    )
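For reference, a BUILD file that uses this macro might look roughly like the sketch below. The load path, the thrift_library target and the listed file names are illustrative only; the cpp_srcs and cpp_hdrs must match whatever the thrift compiler actually emits for your .thrift file:
load("//tools/build_rules:thrift.bzl", "thrift_cc_library")  # hypothetical location of the .bzl above

thrift_cc_library(
    name = "my_service_cc",
    thrift_library = ":my_service_thrift",  # a thrift_library target bundling the .thrift sources
    cpp_srcs = [
        "MyService.cpp",
        "my_service_constants.cpp",
        "my_service_types.cpp",
    ],
    cpp_hdrs = [
        "MyService.h",
        "my_service_constants.h",
        "my_service_types.h",
    ],
)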


Change Logdir of Ray RLlib Training instead of ~/ray_results

I'm using Ray & RLlib to train RL agents on an Ubuntu system. Tensorboard is used to monitor the training progress by pointing it to ~/ray_results where all the log files for all runs are stored. Ray Tune is not being used.
For example, on starting a new Ray/RLlib training run, a new directory will be created at
~/ray_results/DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1
To visualize the training progress, we need to start Tensorboard using
tensorboard --logdir=~/ray_results
Question: Is it possible to configure Ray/RLlib to change the output directory of the log files from ~/ray_results to another location?
Additionally, instead of logging to a directory named something like DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1, can this directory name be set by ourselves?
Failed Attempt: Tried setting
os.environ['TUNE_RESULT_DIR'] = '~/another_dir'
before running ray.init(), but the result log files were still being written to ~/ray_results.
Without using Tune, you can change the logdir using rllib's "Trainer". The "Trainer" class takes in an optional "logger_creator" if you want to specify where to save the log (see here).
A concrete example:
Define your customized logger creator (you can simply modify from the default one):
import os
import tempfile
from datetime import datetime

from ray.tune.logger import UnifiedLogger

def custom_log_creator(custom_path, custom_str):
    timestr = datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
    logdir_prefix = "{}_{}".format(custom_str, timestr)

    def logger_creator(config):
        if not os.path.exists(custom_path):
            os.makedirs(custom_path)
        logdir = tempfile.mkdtemp(prefix=logdir_prefix, dir=custom_path)
        return UnifiedLogger(config, logdir, loggers=None)

    return logger_creator
Pass this logger_creator to the trainer, and start training:
from ray.rllib.agents.ppo import PPOTrainer

trainer = PPOTrainer(config=config, env='CartPole-v0',
                     logger_creator=custom_log_creator(os.path.expanduser("~/another_ray_results/subdir"), 'custom_dir'))

for i in range(ITER_NUM):
    result = trainer.train()
You will find the training results (i.e., TensorBoard events file, params, model, ...) saved under "~/another_ray_results/subdir" with your specified naming convention.
Is it possible to configure Ray/RLlib to change the output directory of the log files from ~/ray_results to another location?
There is currently no way to configure this using the RLlib CLI tool (rllib).
If you're okay with the Python API, then, as described in the documentation, the local_dir parameter of tune.run is responsible for specifying the output directory; the default is ~/ray_results.
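For example, a minimal sketch with the Python API (this assumes a Tune-registered RLlib algorithm, here "PPO", and a standard Ray installation):
from ray import tune

# local_dir redirects the run output away from the default ~/ray_results.
tune.run(
    "PPO",
    config={"env": "CartPole-v0"},
    stop={"training_iteration": 1},
    local_dir="/tmp/my_ray_results",
)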
Additionally, instead of logging to a directory named something like DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1, can this directory name be set by ourselves?
This is governed by the trial_name_creator parameter of tune.run. It must be a function that accepts a trial object and formats it into a string like so:
def trial_name_id(trial):
    return f"{trial.trainable_name}_{trial.trial_id}"

tune.run(..., trial_name_creator=trial_name_id)
Just for anyone who bumps into this problem with Ray Tune.
You can specify local_dir for run_config within tune.Tuner:
# This logs to 2 different trial folders:
# ./results/test_experiment/trial_name_1 and ./results/test_experiment/trial_name_2
# Only trial_name is autogenerated.
tuner = tune.Tuner(trainable,
    tune_config=tune.TuneConfig(num_samples=2),
    run_config=air.RunConfig(local_dir="./results", name="test_experiment"))
results = tuner.fit()
Please see this link for more info.

CANoe: How to select and start test cases from XML Test Module from Python using CANoe COM interface?

currently I am able to:
start CANoe application
load a CANoe configuration file
load a test setup file
def load_test_setup(self, canoe_test_setup_file: str = None) -> None:
    logger.info(
        f'Loading CANoe test setup file <{canoe_test_setup_file}>.')
    if self.measurement.Running:
        logger.info(
            f'Simulation is currently running, so new test setup could not be loaded!')
        return
    self.test_setup.TestEnvironments.Add(canoe_test_setup_file)
    test_environment = self.test_setup.TestEnvironments.Item(1)
    logger.info(f'Loaded test environment is <{test_environment.Name}>.')
How can I access the XML Test Module loaded with the test setup (tse) file and select tests to be executed?
The second to last line in your snippet is most probably causing the issue.
I have been trying to fix this issue for quite some time now and finally found the solution.
Somehow when you execute the line self.test_setup.TestEnvironments.Item(1)
win32com creates an object of type TestSetupItem which doesn't have the necessary properties or methods to access the test cases. Instead we want to access objects of collection types TestSetupFolders or TestModules. win32com creates object of TestSetupItem type even though I have a single XML Test Module (called AutomationTestSeq) in the Test Environment as you can see here.
There are three possible solutions that I found.
Manually clearing the generated cache before each run.
Using win32com.client.DispatchWithEvents or win32com.client.gencache.EnsureDispatch generates a bunch of python files that describe CANoe's object model.
If you had used either of those before, TestEnvironments.Item(1) will always return TestSetupItem instead of the more appropriate type objects.
To remove the cache you need to delete the C:\Users\{username}\AppData\Local\Temp\gen_py\{python version} folder.
Doing this every time is of course not very practical.
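If you do want to automate that cleanup, here is a small sketch; it assumes the cache lives under the Temp path mentioned above, so adjust the path if your gen_py folder is elsewhere:
import os
import shutil
import tempfile

def clear_gen_py_cache():
    # win32com writes its generated wrappers under %TEMP%\gen_py\<python version>.
    gen_py_dir = os.path.join(tempfile.gettempdir(), "gen_py")
    if os.path.isdir(gen_py_dir):
        shutil.rmtree(gen_py_dir)

clear_gen_py_cache()  # call this before dispatching CANoe.Application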
Force win32com to always use dynamic dispatch.
You can do this by using:
canoe = win32com.client.dynamic.Dispatch("CANoe.Application")
Any objects you create using canoe from now on will be dynamically dispatched.
Forcing dynamic dispatch is easier than manually clearing the cache folder every time, and it has always given me good results. But doing this will not give you any insight into the objects: you won't be able to see the acceptable properties and methods for the objects.
Typecast TestSetupItem to TestSetupFolders or TestModules.
This has the risk that if you typecast incorrectly, you will get unexpected results. But it has worked well for me so far.
In short: win32.CastTo(test_env, "ITestEnvironment2"). This will ensure that you are using the recommended object hierarchy as per CANoe technical reference.
Note that you will also have to typecast TestSequenceItem to TestCase to be able to access test case verdict and enable/disable test cases.
Below is a decent example script.
"""Execute XML Test Cases without a pass verdict"""
import sys
from time import sleep
import win32com.client as win32
CANoe = win32.DispatchWithEvents("CANoe.Application")
CANoe.Open("canoe.cfg")
test_env = CANoe.Configuration.TestSetup.TestEnvironments.Item('Test Environment')
# Cast required since test_env is originally of type <ITestEnvironment>
test_env = win32.CastTo(test_env, "ITestEnvironment2")
# Get the XML TestModule (type <TSTestModule>) in the test setup
test_module = test_env.TestModules.Item('AutomationTestSeq')
# {.Sequence} property returns a collection of <TestCases> or <TestGroup>
# or <TestSequenceItem> which is more generic
seq = test_module.Sequence
for i in range(1, seq.Count+1):
# Cast from <ITestSequenceItem> to <ITestCase> to access {.Verdict}
# and the {.Enabled} property
tc = win32.CastTo(seq.Item(i), "ITestCase")
if tc.Verdict != 1: # Verdict 1 is pass
tc.Enabled = True
print(f"Enabling Test Case {tc.Ident} with verdict {tc.Verdict}")
else:
tc.Enabled = False
print(f"Disabling Test Case {tc.Ident} since it has already passed")
CANoe.Measurement.Start()
sleep(5) # Sleep because measurement start is not instantaneous
test_module.Start()
sleep(1)
Just continue what you have done.
The TestEnvironment contains the TestModules. Each TestModule contains a TestSequence which in turn contains the TestCases.
Keep in mind that you cannot start individual TestCases, only the TestModule. But you can enable and disable individual TestCases before execution by using the COM-API.
(typing this from the top of my head, might not work 100%)
test_module = test_environment.TestModules.Item(1)  # or 2 or whatever
test_sequence = test_module.Sequence
for i in range(1, test_sequence.Count + 1):
    test_case = test_sequence.Item(i)
    if ...:
        test_case.Enabled = False  # or True
test_module.Start()
You have to keep in mind that a TestSequence can also contain other TestSequences (i.e. a TestGroup). This depends on how your TestModule is setup. If so, you have to take care of that in your loop and descend into these TestGroups while searching for your TestCase of interest.

Generate Protobuf Python source with Meson

Just learning how to use Meson and want to generate protobuf source/headers for multiple languages - C++, Python, Java, Javascript. C++ was simple enough using the generator function in my meson.build file:
project('MesonProtobufExample', 'cpp')
protoc = find_program('protoc', required : true)
deps = dependency('protobuf', required : true)
gen = generator(protoc,
  output : ['@BASENAME@.pb.cc', '@BASENAME@.pb.h'],
  arguments : ['--proto_path=@CURRENT_SOURCE_DIR@', '--cpp_out=@BUILD_DIR@', '@INPUT@'])
generated = gen.process('MyExample.proto')
ex = executable('my_example', 'my_example.cpp', generated, dependencies : deps)
Which produces the MyExample.pb.cc and MyExample.pb.h files. I figured Python would be just as easy, but I'm a bit stumped since there's no executable() step for my Python script, as it doesn't need to be compiled. I noticed that meson (and CMake, it turns out) don't actually generate the protobuf files until you call executable(), so I can't just skip this step or the MyExample_pb2.py file will not be generated. I have found no example of using meson/python/GPB together after several hours of searching. Shouldn't there be a simple way to 'link' the generated sources to a python file/module the way CMake does?
protobuf_generate_python(PROTO_PY MyExample.proto)
# This command causes the protobuf python binding to be generated
add_custom_target(my_example.py ALL DEPENDS ${PROTO_PY})
You can use a trick with custom_target() and a "fake compiler" in the form of the cp or cat tools (in *nix environments, of course; if you want to support Windows you can use a conditional find_program()). Here is the example with cp:
py_gen = generator( ... )
py_generated = py_gen.process('MyExample.proto')
py_proc = custom_target('py_proto',
  command : [ 'cp', '@INPUT@', '@OUTPUT@' ],
  input : py_generated,
  output : 'MyExample_pb2.py',
  build_by_default : true)
I added the build_by_default flag assuming that you need to generate it as part of the standard build process (of course, enabling this target can be conditional too).

Install issues with 'lr_utils' in python

I am trying to complete some homework in a DeepLearning.ai course assignment.
When I try the assignment in Coursera platform everything works fine, however, when I try to do the same imports on my local machine it gives me an error,
ModuleNotFoundError: No module named 'lr_utils'
I have tried resolving the issue by installing lr_utils but to no avail.
There is no mention of this module online, and now I'm starting to wonder whether it's proprietary to deeplearning.ai?
Or can we resolve this issue in some other way?
You will be able to find the lr_utils.py and all the other .py files (and thus the code inside them) required by the assignments:
Go to the first assignment (i.e. Python Basics with Numpy), which you can always access whether you are a paid user or not,
and then click on the 'Open' button in the menu bar above.
Then you can include the code of the modules directly in your code.
As per the answer above, lr_utils is a part of the deep learning course and is a utility to download the data sets. It should readily work with the paid version of the course but in case you 'lost' access to it, I noticed this github project has the lr_utils.py as well as some data sets
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments
Note:
The Chinese website links did not work when I looked at them. Maybe the server storing the files expired. I did see that this GitHub project had some datasets as well as the lr_utils file, though.
EDIT: The link no longer seems to work. Maybe this one will do?
https://github.com/knazeri/coursera/blob/master/deep-learning/1-neural-networks-and-deep-learning/2-logistic-regression-as-a-neural-network/lr_utils.py
Download the datasets from the answer above.
And use this code (It's better than the above since it closes the files after usage):
import numpy as np
import h5py

def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])
    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
"lr_utils" is not official library or something like that.
Purpose of "lr_utils" is to fetch the dataset that is required for course.
option (didn't work for me): go to this page and there is a python code for downloading dataset and creating "lr_utils"
I had a problem with fetching data from provided url (but at least you can try to run it, maybe it will work)
option (worked for me): in the comments (at the same page 1) there are links for manually downloading dataset and "lr_utils.py", so here they are:
link for dataset download
link for lr_utils.py script download
Remember to extract dataset when you download it and you have to put dataset folder and "lr_utils.py" in the same folder as your python script that is using it (script with this line "import lr_utils").
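Once both are in place, a minimal usage sketch (the file and folder names follow the course assignment layout, nothing official):
import lr_utils

train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = lr_utils.load_dataset()
print(train_set_x_orig.shape, test_set_x_orig.shape)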
The way I fixed this problem was by:
clicking File -> Open -> you will see the lr_utils.py file (it does not matter whether you have the paid or free version of the course),
opening the lr_utils.py file in Jupyter Notebook and clicking File -> Download (store it in your own folder), then rerunning the module imports. It will work like magic.
I did the same process for the datasets folder.
You can download the train and test datasets directly here: https://github.com/berkayalan/Deep-Learning/tree/master/datasets
And you need to add this code to the beginning:
import numpy as np
import h5py
import os

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels
    classes = np.array(test_dataset["list_classes"][:])  # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
I faced a similar problem and followed these steps:
1. Import the following libraries
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
2. Download the train_catvnoncat.h5 and test_catvnoncat.h5 files from either of the links below:
[https://github.com/berkayalan/Neural-Networks-and-Deep-Learning/tree/master/datasets]
or
[https://github.com/JudasDie/deeplearning.ai/tree/master/Improving%20Deep%20Neural%20Networks/Week1/Regularization/datasets]
3. Create a folder named datasets and paste these two files into this folder.
[Note: the datasets folder and your source code file should be in the same directory]
4. Run the following code
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])
    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
5. Load the data:
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Check the datasets:
print(len(train_set_x_orig))
print(len(test_set_x_orig))
Your data set is ready; you may check the len of the train_set_x_orig and test_set_x_orig variables. For mine, they were 209 and 50.
I could download the dataset directly from the Coursera page.
Once you open the Coursera notebook, go to File -> Open and the following window will be displayed:
Here the notebooks and datasets are displayed; you can go to the datasets folder and download the required data for the assignment. The package lr_utils.py is also available for download.
Below is the code; just save it in a file named "lr_utils.py" and you can use it.
import numpy as np
import h5py

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels
    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels
    classes = np.array(test_dataset["list_classes"][:])  # the list of classes
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
If your code file cannot find your newly created lr_utils.py file, just add this code:
import sys
sys.path.append("full path of the directory where you saved lr_utils.py file")
Here is the way to get the dataset, as per @ThinkBonobo:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments/datasets
Write an lr_utils.py file, as in @StationaryTraveller's answer above, and put it into any directory on sys.path.
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        ....
!!! BUT make sure that you delete 'datasets/', because now the name of your data file is just train_catvnoncat.h5
Restart the kernel and good luck.
I may add to the answers that you can save the file with the lr_utils script on disk and import it as a module using the importlib util functions in the following way.
The code below came from the general thread about importing functions from external files into the current user session:
How to import a module given the full path?
import importlib.util

### Source the load_dataset() function from a file
# Specify a name (I think it can be whatever) and the path to the lr_utils.py script locally on your PC:
util_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_utils.py")
# Make a module
load_utils = importlib.util.module_from_spec(util_script)
# Execute it on the fly
util_script.loader.exec_module(load_utils)
# Load your function
load_utils.load_dataset()
# Then you can use your load_dataset() coming from the above specified 'module' called load_utils
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_utils.load_dataset()
# This could be a general way of calling different user-specified modules, so I did the same for the rest of the
# neural network functions and put them into a separate file to keep my script clean.
# Just remember that Python treats it like a module, so you need to prefix the function name with the 'module' name, e.g.:
# d = nnet_utils.model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1000, learning_rate = 0.005, print_cost = True)
nnet_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_nnet.py")
nnet_utils = importlib.util.module_from_spec(nnet_script)
nnet_script.loader.exec_module(nnet_utils)
That was the most convenient way for me to source functions/methods from different files in Python so far.
I am coming from the R background where you can call just one line function source() to bring external scripts contents into your current session.
The above answers didn't help; some links had expired.
So, lr_utils is not a pip library but a file that lives alongside the notebook on the Coursera website.
You can click on "Open", and it'll open the explorer where you can download everything that you would want to run in another environment.
(I used this on a browser.)
This is how I solved mine: I copied the lr_utils file and pasted it into my notebook; thereafter I downloaded the dataset by zipping the files and extracting the archive with the following code. Note: run the code in the Coursera notebook and select only the zipped file in the directory to download.
!pip install zipfile36
import zipfile

zf = zipfile.ZipFile('datasets/train_catvnoncat_h5.zip', mode='w')
try:
    zf.write('datasets/train_catvnoncat.h5')
    zf.write('datasets/test_catvnoncat.h5')
finally:
    zf.close()

How to setup buildbot for

I am quite new to buildbot and struggling to create a configuration for the following python code structure:
A library containing some general classes and functions, and two programs that depend on that library. All three have their own git repository. Let's call the library the_lib and the programs prog_a and prog_b.
What I would like buildbot to do for me is periodically check the repositories for changes and if so rebuild what is necessary. So a change to the source of the_lib should rebuild all three, a change to the source of prog_a should only rebuild prog_a and a change to the source of prog_b should only rebuild prog_b.
I am at the point where I am able to build any of the three when its source changes, but how do I introduce the dependency of prog_a and prog_b on the_lib?
Cheers,
Feoh
You can trigger multiple builders with a single source change, in the following example the first two each trigger their own builds, but the third one triggers all three:
yield basic.AnyBranchScheduler(
    name = prog_a, treeStableTimer = delay,
    change_filter = my_a_filter,
    builderNames = [prog_a],
)
yield basic.AnyBranchScheduler(
    name = prog_b, treeStableTimer = delay,
    change_filter = my_b_filter,
    builderNames = [prog_b],
)
yield basic.AnyBranchScheduler(
    name = the_lib, treeStableTimer = delay,
    change_filter = my_lib_filter,
    builderNames = [prog_a, prog_b, the_lib],
)
For changes in prog_(a|b) you can use a simple single branch scheduler that will call their builders.
For the_lib you have two options:
Create a Dependent scheduler for the builders of prog_a and prog_b, and set the upstream scheduler to the single branch scheduler of the_lib (see the sketch after this list).
Configure a Triggerable scheduler for prog_(a|b), and trigger them using the Trigger build step from the_lib builder.
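A rough sketch of the first option, assuming a current Buildbot master.cfg; the scheduler and builder names, the repository URL and the treeStableTimer value are placeholders:
from buildbot.plugins import schedulers, util

lib_scheduler = schedulers.SingleBranchScheduler(
    name="the_lib",
    change_filter=util.ChangeFilter(repository="https://example.com/the_lib.git"),
    treeStableTimer=60,
    builderNames=["the_lib"],
)

# Rebuild prog_a and prog_b whenever the the_lib builder has run successfully.
lib_dependents = schedulers.Dependent(
    name="the_lib_dependents",
    upstream=lib_scheduler,
    builderNames=["prog_a", "prog_b"],
)

# Together with the usual per-project single branch schedulers for prog_a and
# prog_b, these all get appended to c['schedulers'] in master.cfg.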
