I am having trouble understanding how to use the CLI class properly. Calling the class constructor without a script automatically runs the CLI in interactive mode, so you need to manually exit interactive mode to obtain the class instance; only then can you call the class methods on that instance. This seems very strange.
What I am trying to do is write a program which configures the network and then opens several xterm windows on separate nodes and launches an application inside them. Is this possible?
Edit:
For example something like the following:
#!/usr/bin/python
from mininet.net import Mininet
from mininet.log import setLogLevel
from mininet.cli import CLI
from mininet.topolib import TreeTopo
def test():
"Create and test a simple network"
net = Mininet(TreeTopo(depth=2,fanout=2))
net.start()
cli = CLI(net)
CLI.do_xterm(cli, "h1 h2")
net.stop()
if __name__ == '__main__':
setLogLevel('info')
test()
Calling the CLI class constructor in order to obtain the class instance automatically launches Mininet in interactive mode. This needs to be manually exited before the do_xterm method can be invoked on the class instance.
I suppose a CLI is made to be used on stdin, so making use of scripting instead of programmatic manipulation of the CLI makes some sense.
If you want to obtain a reference to the cli object without interactive mode, you could work around it by creating an empty text file called "null_script" and then calling
cli = CLI(net, script='null_script')
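Applied to the example above, a minimal sketch of that workaround (the file is created on the fly here; the name null_script is arbitrary) could look like this:
from mininet.net import Mininet
from mininet.cli import CLI
from mininet.topolib import TreeTopo

net = Mininet(TreeTopo(depth=2, fanout=2))
net.start()
open('null_script', 'w').close()        # empty script file: nothing for the CLI to execute
cli = CLI(net, script='null_script')    # returns immediately instead of entering interactive mode
CLI.do_xterm(cli, 'h1 h2')              # the instance can now be driven programmatically
net.stop()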
Your real goal seems to be to programmatically open xterms and have them run applications. Since you don't give a reason why you can't use scripts, I propose a solution that uses a script. Put the following in a text file:
py h1.cmd('screen -dmS mininet.h1')
sh xterm -title Node:h1 -e screen -D -RR -S mininet.h1 &
sh screen -x -S mininet.h1 -X stuff 'ls'`echo '\015'`
Using this text file as a script in the CLI works for me, both using the 'source' command on the CLI and by passing the filename into 'script='.
I took the command arguments from the makeTerm function in term.py, and the screen stuff arguments from an answer on superuser. Just replace 'ls' with the name of the application you want to run.
Each screen you are trying to attach to needs to have a unique name; otherwise you will get a message listing the matching names, and you will have to specify a pid for the correct session, which would complicate things.
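For completeness, a sketch of how that script file could be run non-interactively from Python (the filename xterm_script.txt is just an example, and the imports are the same as in the program above):
net = Mininet(TreeTopo(depth=2, fanout=2))
net.start()
CLI(net, script='xterm_script.txt')   # executes the three script lines above and returns without a prompt
net.stop()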
I am trying to create new nodes using the CloudClient in the SaltStack Python API. Nodes are created successfully, but I don't see any logging happening. Below is the code I am using.
from salt.cloud import CloudClient
cloud_client = CloudClient()
kwargs = {'parallel': True}
cloud_client.map_run(path="mymap.map",**kwargs)
Is there a way to run the same code in debug mode to see the output on the console from this Python script, if logging cannot be done?
These are the logging parameters in my cloud configuration:
log_level: all
log_level_logfile: all
log_file: /var/logs/salt.log
When I try to run salt-cloud on the CLI it works, with the command below:
salt-cloud -m mymap.map -P
I was able to make it work by adding the code below:
from salt.log.setup import setup_console_logger
setup_console_logger(log_level='debug')
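Putting the two together, a minimal sketch of the full script with console logging enabled (reusing mymap.map from the question) might look like this:
from salt.cloud import CloudClient
from salt.log.setup import setup_console_logger

setup_console_logger(log_level='debug')   # echo debug-level output to the console
cloud_client = CloudClient()
kwargs = {'parallel': True}
cloud_client.map_run(path="mymap.map", **kwargs)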
So I am creating an application that can connect printers with a Python GUI that runs PowerShell scripts in the background. I was wondering if there was a way I could pass a variable inputted from a Python widget into a PowerShell script that is being invoked by Python. This variable would be the name of the printer that I could specify in Python so that I do not have to create separate scripts for each printer.
My code in Python that calls upon the PS script:
def connect():
    if self.printerOpts.get() == 'Chosen Printer':
        subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", '-ExecutionPolicy', 'Unrestricted', '.\'./ScriptName\';'])
PS script that connects printer to computer:
Add-Printer -ConnectionName \\server\printer -AsJob
Basically, I am wondering if I can pass a variable from Python into the "printer" part of my PS script so that I do not have to create a different script for each printer that I would like to add.
A better way to do this would be to do it entirely in PowerShell or entirely in Python.
What you're after is doable. You can pass it in the same way that you have passed -ExecutionPolicy Unrestricted, by ensuring that the PowerShell script is expecting the variable.
My Python is non-existent, so please bear with me if that part doesn't work.
Python
myPrinter = 'PrinterName'  # string variable in Python holding the printer name
subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe",'-ExecutionPolicy','Unrestricted', '.\'./ScriptName\';','-printer',myPrinter])
PowerShell
param(
    $printer
)
Add-Printer -ConnectionName \\server\$printer -AsJob
The way that worked for me was first to specify that I was passing a variable as a string in my PS script:
param([string]$path)
Add-Printer -ConnectionName \\server\$path
My PS script was not originally expecting this variable. In my Python script I had to first define my variable, named path, as a string, and then pass path at the end of my subprocess call.
path = "c"
subprocess.call(["C:\\WINDOWS\\system32\\WindowsPowerShell\\v1.0\\powershell.exe",'-ExecutionPolicy','Unrestricted', 'Script.ps1', path])
I am invoking Robot Framework on a folder with a command like following:
robot --name MyTestSuite --variablefile lib/global_variables.py --variable TARGET_TYPE:FOO --variable IMAGE_TYPE:BAR --prerunmodifier MyCustomModifier.py ./tests
MyCustomModifier.py contains a simple SuiteVisitor class, which includes/excludes tags and does a few other things based on some of the variable values set.
How do I access TARGET_TYPE and IMAGE_TYPE in that class? The method shown here does not work, because I want access to the variables before tests start executing, and therefore I get a RobotNotRunningError with message Cannot access execution context.
After finding this issue report, I tried to downgrade to version 2.9.1 but nothing changed.
None of the public APIs seems to provide this information, but debugging the main code does reveal an alternative way of obtaining it. It has to be said that this example code works with version 3.0.2 but may not work in the future, as these are internal functions subject to change. That said, I do think the approach will remain valid.
As Robot Framework is an application, it obtains the command line arguments through its main function, run_cli (when run from the command line). These arguments come from the system itself and can be obtained in any Python script via:
import sys
cli_args = sys.argv[1:]
Robot Framework has a function that interprets the command line argument list and turns it into a more readable structure:
from robot.run import RobotFramework
import sys
options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
Here arguments is a list of the data sources given on the command line (in this case ./tests), while options is a dictionary of the parsed options; every --variable NAME:value pair ends up in the list stored under its 'variable' key. For example:
options['variable'][1] == 'IMAGE_TYPE:BAR'
This should allow you to access the information you need.
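A hedged sketch of how this could be used inside the prerun modifier itself (the class layout and the filter call are illustrative only; written against version 3.0.2 as noted above):
import sys
from robot.api import SuiteVisitor
from robot.run import RobotFramework

class MyCustomModifier(SuiteVisitor):
    def __init__(self):
        options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
        # each --variable NAME:value option arrives as a plain "NAME:value" string
        self.variables = dict(v.split(':', 1) for v in options.get('variable', []))

    def start_suite(self, suite):
        target_type = self.variables.get('TARGET_TYPE')   # 'FOO' in the example command line
        image_type = self.variables.get('IMAGE_TYPE')     # 'BAR' in the example command line
        if target_type == 'FOO':
            # purely illustrative: drop tests carrying a made-up tag when targeting FOO
            suite.filter(excluded_tags=['bar-only'])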
So I have a Python script that connects to the client servers and then gets some data that I need.
Right now it works this way: my bash script on the client side needs input like the one below, and called like that it works.
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to get the user input in my Python script and then pass it to the remotely called bash script, and that's where I run into problems. Below is the method I tried, without any luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko - you should tag the question or include that info in it.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I also think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? It has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the SSH login etc., so it can be much more secure than trying to make web services or other things.
You should always be wary of injection attacks - I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that would cause very bad problems for you. It would instead be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least to have a lot of protection around the input variables (one way to do that is sketched below). Of course, if this is only for you (or trained people), then it's a little easier.
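As one sketch of that protection (assuming Python 3, where shlex.quote is available - pipes.quote plays the same role on Python 2 - and reusing the client object from the question), each user-supplied argument can be quoted before it is spliced into the remote command line:
import sys
import shlex

# quote every argument so shell metacharacters are passed through literally
args = ' '.join(shlex.quote(a) for a in sys.argv[1:])
client.exec_command('/apps./tempo.sh ' + args)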
I recommend using paramiko for the ssh connection.
import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys automatically
ssh_client.connect(server, username=user, password=password)      # server, user and password defined elsewhere
...
ssh_client.close()
And If you want to simulate a terminal, as if a user was typing:
import time   # needed for the polling loop below

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')   # set a known prompt so we can tell when a command has finished

def exec_command(cmd):
    """Send an ssh command, wait for the prompt to return, and return the output"""
    prompt = 'python-ssh:'   # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024)        # was ssh_client.chan.recv(1024); chan is the channel object
    return buff[:-len(prompt)]         # strip the trailing prompt from the captured output
Example usage: exec_command('pwd')
And the result is even returned to you via ssh.
Assuming that you are using paramiko you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument which is the script's file name, and build a space-separated string. This argument string is then concatenated with the bash script command and executed remotely.
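If you also want to read what the remote script prints, exec_command returns stdin, stdout and stderr as file-like objects, so a short follow-up to the snippet above could be:
stdin, stdout, stderr = client.exec_command('{} {}'.format(command, args))
print(stdout.read())   # the remote script's standard output
print(stderr.read())   # any error output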
I read here that it might be possible to use the Python interpreter to access Odoo and test things interactively (https://www.odoo.com/forum/help-1/question/how-to-get-a-python-shell-with-the-odoo-environment-54096), but doing this in a terminal:
ipython
import sys
import openerp
sys.argv = ['', '--addons-path=~/my-path/addons', '--xmlrpc-port=8067', '--log-level=debug', '-d test',]
openerp.cli.main()
it starts the Odoo server, but I can't write anything in that terminal tab to use it interactively. If, for example, I write anything like print 'abc', I don't get any output. Am I missing something here?
Sometimes I use the logging library to print output on the console/terminal.
For example:
import logging
logging.info('Here is your message')
logging.warning('Here is your message')
For more details, you may check out this reference link.
The closest thing I have found to an interactive session is to put the line
import pdb; pdb.set_trace()
in the method I want to inspect, and then trigger that method.
It's clunky, but it works.
As an example, I was just enhancing the OpenChatter implementation for our copy of OpenERP, and during the "figure things out" stage I had that line in .../addons/mail/mail_thread.py::mail_thread.post_message so I could get a better idea of what was happening in that method.
The correct way to do this is with shell:
./odoo-bin shell -d <yourdatabase>
Please be aware that if you already have an instance of Odoo running, the port will be busy. In that case, the instance you are opening should use a different port, so the command should be something like this:
./odoo-bin shell --xmlrpc-port=8888 -d <yourdatabase>
But if you want to have your addons available in the new instance, you can do something similar to the following:
./odoo-bin shell -c ~/odooshell.conf -d <yourdatabase>
In your odooshell.conf you can have whatever you need configured (port, addons_path, etc.), so you can work smoothly with your shell.
As I always use docker, this is what I do to have my shell configured in docker:
docker exec -ti <mycontainer> odoo shell -c /etc/odoo/odooshell.conf -d <mydatabase>
You will have the env available to do anything. You can write quick Python code to do whatever you need. The syntax is very similar to server actions. For example:
partner_ids = env['res.partner'].search([])
for partner in partner_ids:
    partner['name'] = partner.name + '.'
env.cr.commit()
Remember to call env.cr.commit() if you make any data changes.