I am developing a Ryu app. The app is basically a Python script. Ryu apps are invoked by ryu-manager like this:
ryu-manager {filename}
Certain parameters are taken by ryu-manager itself. I want to know if there is a way I could pass arguments to my file.
Python's argparse module can parse command-line options, but I am not sure it will work here, since all the arguments I provide are consumed by ryu-manager, not by my script.
Any help would be appreciated.
I haven't tried this, but:
"Ryu currently uses oslo.config.cfg for command-line parsing.
(ryu/contrib/oslo/config).
There are several examples in the tree. ryu/app/tunnel_port_updater.py"
from
http://comments.gmane.org/gmane.network.ryu.devel/2709
see also
https://github.com/osrg/ryu/blob/master/ryu/app/tunnel_port_updater.py
The Ryu 'getting started' page simply suggests:
ryu-manager [--flagfile <path to configuration file>] [generic/application specific options…]
http://www.osrg.net/ryu/_sources/getting_started.txt
Doing so is a four-step process. I'll show an example where you read parameters and then print them, but you could assign them to variables or do whatever else you like by following the same process.
Create a .conf file (e.g. params.conf)
#Example Conf File
[DEFAULT]
param1_int = 42
param2_str = "You read my data :)"
param3_list = 1,2,3
param4_float = 3.14
Add the following code to your __init__ method. I did this in the simple_switch_13.py that comes with Ryu:
from ryu import cfg
:
:
class SimpleSwitch13(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(SimpleSwitch13, self).__init__(*args, **kwargs)
        :
        CONF = cfg.CONF
        CONF.register_opts([
            cfg.IntOpt('param1_int', default=0, help='The ultimate answer'),
            cfg.StrOpt('param2_str', default='default', help='A string'),
            cfg.ListOpt('param3_list', default=None, help='A list of numbers'),
            cfg.FloatOpt('param4_float', default=0.0, help='Pi? Yummy.')])
        print('param1_int = {}'.format(CONF.param1_int))
        print('param2_str = {}'.format(CONF.param2_str))
        print('param3_list = {}'.format(CONF.param3_list))
        print('param4_float = {}'.format(CONF.param4_float))
Run Script
ryu-manager paramtest.py --config-file [PATH/TO/FILE/params.conf]
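For reference, with the example params.conf above, the prints in __init__ should produce output roughly like this (the exact rendering of the quoted string and the list depends on how oslo.config parses those values):
param1_int = 42
param2_str = You read my data :)
param3_list = ['1', '2', '3']
param4_float = 3.14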
Profit
I referenced the following when putting together my answer; they provide more detail (such as the oslo.config stuff, which I had never heard of before running into this issue).
More info on oslo.config: http://www.giantflyingsaucer.com/blog/?p=4822
Ryu email chain on this issue: https://sourceforge.net/p/ryu/mailman/message/33410077/
I have not found a way to pass arguments to a Ryu controller. One way I have used to get around this is to pass arguments as environment variables. For example, I have a program that invokes ryu-manager and needs to pass a parameter to the app. I do this as follows: ARG=value ryu-manager app.py
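For reference, a minimal sketch of reading such an environment variable inside a Ryu app (the class name MyApp and the fallback value are just placeholders):
import os

from ryu.base import app_manager

class MyApp(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(MyApp, self).__init__(*args, **kwargs)
        # reads the value passed as: ARG=value ryu-manager app.py
        arg = os.environ.get('ARG', 'default-value')
        self.logger.info('ARG = %s', arg)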
I am new to the more "advanced" usage of Python. I decided to learn and implement unit testing for all my scripts.
Here is my issue:
I have a function from an external package I made myself, called gendiag. This package has a notify function that sends an email to a set recipient, defined in a config file, but leaves the subject and the message as parameters:
gendiag.py:
import subprocess
import yaml
#...
try:
    configfile = BASE_PACKAGE_DIR.joinpath('config.yml')
    with open(configfile, "r") as f:
        config = yaml.safe_load(f)
except Exception as e:
    print("Uh Oh!")

def notify(subject, message):
    address = config['mail']['nipt']
    command = f'echo -e "{message}" | mail -s "{subject}" "{address}"'
    subprocess.run(command, shell=True)
In another project called watercheck, which imports gendiag, I use this function to get some info from every directory and send it in an email:
watercheck.py:
import os

import gendiag as gdl
#...

def scan_water_metrics(indir_list):
    for dir in indir_list:
        # Do the things; this is a dummy example to simplify
        some_info = os.path.basename(dir)
        subject = "Houlala"
        message = "I have parsed this directory, amazing.\n"
        message += some_info
        gdl.notify(subject, message)
Now in my test_watercheck.py I would like to test that this function works with already created dummy data. But of course, I don't want to send an email to the rest of the world every time I use pytest to check that email sending works. Instead, I was thinking I would create the following mock function in conftest.py:
conftest.py:
import subprocess
from unittest import mock

import pytest

import gendiag

@pytest.fixture
def mock_scan_water_metrics():
    mck = gendiag
    mck.notify = mock.Mock(
        return_value=subprocess.check_output(
            "echo 'Hmm interesting' | mail -s Test my_own_email@gmule.com", shell=True
        )
    )
    return mck
And then pass this mock to my test function in test_watercheck.py:
test_watercheck.py:
import gendiag
import pytest
from unittest import mock

from src import watercheck

def test_scan_water_metrics(mock_scan_water_metrics):
    indir_list = ["tests/outdir/CORRECT_WATERCHECK", "tests/outdir/BADWATER_WATERCHECK"]
    watercheck.scan_water_metrics(indir_list)
So this works in the sense that I am able to overwrite the email, but I would still like to test that some_info is collected properly, and for that I need to access the subject and message passed to the mock function. This is the very confusing part for me. I don't doubt the answer is probably out there, but my understanding of the topic is too limited for me to find it, or even to formulate my question properly.
I have tried to read more about the mock.Mock object to see whether I could collect the parameters somewhere, and I have tried the following to see if I could access them:
My attempt in conftest.py:
@pytest.fixture
@mock.patch("gendiag.notify_nipt")
def mock_scan_water_metrics(event):
    print("Event : " + str(event))
    args, kwargs = event.call_args
    print("Args : " + str(args))
    print("Kwargs : " + str(kwargs))
    mck = gendiag
    mck.notify = mock.Mock(
        return_value=subprocess.check_output(
            "echo 'Hmm interesting' | mail -s Test my_own_email@gmule.com", shell=True
        )
    )
    return mck
I was hoping that somewhere in args I would find my two parameters, but when starting pytest I get an error that the module "gendiag" does not exist, even though I had imported it everywhere just to be sure. I imagine the line causing it is the decorator here: @mock.patch("gendiag.notify_nipt"). I have tried with @mock.patch("gdl.notify_nipt") as well, since that is how it is called in the main function, with no success.
To be honest, I am really not sure where to go from here; it's getting too complex for me for now. How can I simply access the parameters given to the function before it is decorated by pytest?
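For completeness, a minimal, untested sketch of the kind of call inspection described above: patch notify directly on the gendiag module with mock.patch.object so the replacement records its arguments, then read them back from call_args (the test name is made up; the asserted values just mirror the dummy example earlier):
from unittest import mock

import gendiag
from src import watercheck

def test_scan_water_metrics_arguments():
    with mock.patch.object(gendiag, "notify") as fake_notify:
        watercheck.scan_water_metrics(["tests/outdir/CORRECT_WATERCHECK"])
        # call_args holds the positional and keyword arguments of the last call
        (subject, message), kwargs = fake_notify.call_args
        assert subject == "Houlala"
        assert "CORRECT_WATERCHECK" in message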
I have a situation like this: I am creating a Python package. That package needs to use Redis, so I want to allow the user of the package to define the Redis URL.
Here's how I attempted to do it:
bin/main.py
from logging import DEBUG, basicConfig

from my_package.main import run
from my_package.config import config

basicConfig(filename='logs.log', level=DEBUG)

# the user defines the redis url
config['redis_url'] = 'redis://localhost:6379/0'

run()
my_package/config.py
config = {
    "redis_url": None
}
my_package/main.py
from .config import config

def run():
    print(config["redis_url"])  # prints None instead of what I want
Unfortunately, it doesn't work. In main.py the value of config["redis_url"] is None instead of the URL defined in the bin/main.py file. Why is that? How can I make it work?
I could pass the config to the run() function, but then if I run some other function I will need to pass the config to that function as well. I'd like to pass it one time ideally.
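For illustration, a minimal sketch of one way to keep the "set it once" idea: a small configure() helper (the helper name is assumed; it is not part of the code above) that mutates the shared dict in place, so every module importing my_package.config sees the update:
# my_package/config.py
config = {
    "redis_url": None
}

def configure(**settings):
    # update the one shared dict rather than rebinding the name
    config.update(settings)

# bin/main.py would then call it once before run():
# from my_package.config import configure
# configure(redis_url='redis://localhost:6379/0')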
replace "Variables config" with "Variables /path/CLOUD234/__init__.py" in robot framework .Cloud instance is defined at run time .In each run the value changes ,so I have created a python file initpath.py as follows with fun() keyword .It will return the required path .How can I call it in Variables section of robot framework ? Thank you in advance.
import socket
import re
import os

def fun():
    name = socket.gethostname()
    pattern = ".*CLOUD[0-9]*"
    hname = re.findall(pattern, name)
    cloud_instance = hname[0].replace("-", "_")
    init_file = "/path/{}/__init__.py".format(cloud_instance)
    return init_file
The Variables section does not execute any code.
I suggest you run the Python under the test case/suite setup and use Set Test Variable to set the variable.
Try the following; here you are not required to use any Variables section to call the .py file.
Under the *** Settings *** section add
Library initpath.py
I'm using Fabric to connect to a remote host. Once there, I try to call a script that I made (it parses the file I give as an argument). But when I call the script from inside my fabfile.py, it assumes the path I gave is on the machine I launch the fabfile from (so not my remote host).
In my fabfile.py I have:
from fabric.api import env
import servclasse

env.host = 'host1'

def listconf():
    # here I browse to the correct folder
    s = servclasse.Server("my.file")  # this is where I want it to open the host1:my.file file and instantiate a class from what it parsed
If I do this, it tries to open the file from the folder where servclasse.py is. Is there a way to give a "remote path" as an argument? I would rather not download the file.
Should I upload the script servclasse.py with operations.put before calling it?
Edit: more info
In my servclasse I have this:
def __init__(self, path):
    self.config = ConfigParser.ConfigParser(allow_no_value=True)
    self.config.readfp(open(path))
The open() function was the problem.
I figured out how to do it, so I'll drop it here in case someone reads this topic one day:
import StringIO
from fabric.api import get
from servclasse import Server

def listconf():
    # first I browse to the correct folder, then:
    contents = StringIO.StringIO()
    get("MyFile", contents)
    contents.seek(0)
    s = Server(contents)
and in servclasse.py:
def __init__(self, objfile):
    self.config = ConfigParser.ConfigParser(allow_no_value=True)
    self.config.readfp(objfile)
    # and I do my stuff
I've been thinking about ways to automatically set up configuration in my Python applications.
I usually use the following type of approach:
'''config.py'''

class Config(object):
    MAGIC_NUMBER = 44
    DEBUG = True

class Development(Config):
    LOG_LEVEL = 'DEBUG'

class Production(Config):
    DEBUG = False
    REPORT_EMAIL_TO = ["ceo@example.com", "chief_ass_kicker@example.com"]
Typically, when I'm running the app in different ways I could do something like:
from config import Development, Production

def do_something(self):
    if self.conf.DEBUG:
        pass

def __init__(self, config='Development'):
    if config == "production":
        self.conf = Production
    else:
        self.conf = Development
I like working like this because it makes sense; however, I'm wondering if I can somehow integrate this into my git workflow too.
A lot of my applications have separate scripts, or modules that can be run alone, so there isn't always a monolithic application to inherit configurations from some root location.
It would be cool if a lot of these scripts and separate modules could check which branch is currently checked out and make their default configuration decisions based on that, e.g., by looking for a class in config.py that shares the same name as the currently checked out branch.
Is that possible, and what's the cleanest way to achieve it?
Is it a good/bad idea?
I'd prefer spinlok's method, but yes, you can do pretty much anything you want in your __init__, e.g.:
import inspect, subprocess, sys

def __init__(self, config='via_git'):
    if config == 'via_git':
        gitsays = subprocess.check_output(['git', 'symbolic-ref', 'HEAD'])
        cbranch = gitsays.rstrip('\n').replace('refs/heads/', '', 1)
        # now you know which branch you're on...
        tbranch = cbranch.title()  # foo -> Foo, for class name conventions
        classes = dict(inspect.getmembers(sys.modules[__name__], inspect.isclass))
        if tbranch in classes:
            print 'automatically using', tbranch
            self.conf = classes[tbranch]
        else:
            print 'on branch', cbranch, 'so falling back to Production'
            self.conf = Production
    elif config == 'production':
        self.conf = Production
    else:
        self.conf = Development
This is, um, "slightly tested" (python 2.7). Note that check_output will raise an exception if git can't get a symbolic ref, and this also depends on your working directory. You can of course use other subprocess functions (to provide a different cwd for instance).
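As a usage sketch (the enclosing class name App is assumed here; the answer does not name the class): on a branch called development, instantiating with the default config='via_git' would land on the Development class:
app = App()                    # defaults to config='via_git'
print app.conf.LOG_LEVEL       # 'DEBUG', from the Development class
print app.conf.MAGIC_NUMBER    # 44, inherited from Config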