I am trying to write a Python CLI program using the cmd module. My objective is to execute other Python scripts from this CLI program: I have some Python scripts in one folder and the CLI program in another folder, and I want to run those scripts from the CLI program.
Below is the CLI program, which uses the os.popen method to execute the other script:
import cmd
import os
import sys
class demo(cmd.Cmd):
    def do_shell(self, line, args):
        """here is the function to execute the other script"""
        output = os.popen('xterm -hold -e python %s' % args).read()
        output(sys.argv[1])
    def do_quit(self, line):
        return True

if __name__ == '__main__':
    demo().cmdloop()
and here is the error:
(Cmd) shell demo-test.py
Traceback (most recent call last):
  File "bemo.py", line 18, in <module>
    demo().cmdloop()
  File "/usr/lib/python2.7/cmd.py", line 142, in cmdloop
    stop = self.onecmd(line)
  File "/usr/lib/python2.7/cmd.py", line 221, in onecmd
    return func(arg)
TypeError: do_shell() takes exactly 3 arguments (2 given)
Here are some links to other cmd CLI programs:
1. cmd – Create line-oriented command processors
2. Console built with Cmd object (Python recipe)
Please run the above code on your system.
As specified in the doc:
https://pymotw.com/2/cmd/index.html
do_shell is defined as such:
do_shell(self, args):
But you are defining it as
do_shell(self, line, args):
I think the intended use is to define it as specified in the documentation.
I ran your code and followed your example, and I replicated your error. Then, as specified in the documentation for do_shell, I changed the method to the expected signature:
do_shell(self, args):
From there, the sys module was missing, so you need to import that as well (unless you simply did not copy it into your post). After that, I got an IndexError for index out of range, probably because the code expects extra parameters to be passed on the command line.
Furthermore, because you are talking about Python scripts, I don't see the need for the extra commands you are adding, so I simply changed the line to this:
output = os.popen('python %s' % args).read()
However, if there is a particular reason you need the xterm command, then you can probably put that back and it will work for your particular case.
I also did not see the use case for this:
output(sys.argv[1])
I commented that out. I ran your code, and everything worked. I created a test file that just did a simple print and it ran successfully.
So, the code actually looks like this:
def do_shell(self, args):
    """here is the function to execute the other script"""
    output = os.popen('python %s' % args).read()
    print output
The full code should look like this:
import cmd
import os
import sys

class demo(cmd.Cmd):
    def do_shell(self, args):
        """here is the function to execute the other script"""
        output = os.popen('python %s' % args).read()
        print output
    def do_quit(self, line):
        return True

if __name__ == '__main__':
    demo().cmdloop()
Related
I have my fabfile.py like this, which I run with fab2 deploy on the command line.
from fabric2 import task

hosts = ["host1"]

@task(hosts=hosts)
def deploy(c):
    with c.cd("/tmp"):
        c.run("uptime")

if __name__ == "__main__":
    deploy()
I would like to run this code from within a Python program, like python3 fabfile.py, but it gives this error message:
if not isinstance(args[0], Context):
IndexError: tuple index out of range
Is it possible to call it from Python code?
Thanks
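A hedged sketch of one way this might be called from plain Python (this is an assumption, not from the original post): a function decorated with @task expects a Context as its first argument, and fabric2's Connection is a Context subclass, so you can build the connection yourself and pass it in. The host name is the placeholder from the question.

from fabric2 import Connection, task

@task
def deploy(c):
    with c.cd("/tmp"):
        c.run("uptime")

if __name__ == "__main__":
    # Build the context explicitly instead of letting the fab2 CLI create it;
    # "host1" is the placeholder host from the question.
    deploy(Connection("host1"))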
I am trying to run a Python script within another Python script, which will run 10 times and produce 10 outputs.
I want to run program1.py inside program2.py. My program1.py originally wraps a C executable and takes one command line argument.
The program1.py looks like below:
import os
import sys
dataset = sys.argv[1]
os.system(f"/home/Dev/c4.5 -u -f {dataset}")
os.system(f"/home/Dev/c4.5rules -u -f {dataset}")
os.system(f"/home/Dev/c4.5rules -u -f {dataset} > Temp")
f = open('Temp')
# Some code
Here c4.5 and c4.5rules are the names of the executable files. To run this I was using python3 program1.py dataset_name.
Now I am trying to call program1.py from program2.py, and I am trying the approach below:
import os
import subprocess
# Some code
for track in range(0, 10):
    with open(f'Train_{track}', 'r') as firstfile, open(f'DF_{track}.data', 'w') as secondfile:
        for line in firstfile:
            secondfile.write(line)
    os.system("/home/Dev/program1.py DF_track")
    #subprocess.Popen("/home/Dev/program1.py DF_track", shell=True)
Here I simply want to get the output of program1.py 10 times and use DF_track as the command line input for each run.
Using the above approach I am getting lots of errors. Please help.
Edit 1:
Actually, whenever I try to run it, my cursor stops working and the terminal freezes, so I am unable to copy the errors.
Here are some of them:
1. attempt to perform an operation not allowed by security policy.
2. syntax error : word expected (expecting ")")
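A hedged guess at a fix, not part of the original question: the call passes the script path to os.system without a Python interpreter, and DF_track is a literal string rather than the loop's f-string. A sketch along these lines might be what was intended (the exact argument format expected by program1.py is an assumption):

import subprocess
import sys

for track in range(0, 10):
    with open(f'Train_{track}', 'r') as firstfile, open(f'DF_{track}.data', 'w') as secondfile:
        for line in firstfile:
            secondfile.write(line)
    # Run program1.py with an explicit interpreter and interpolate the loop index.
    subprocess.run([sys.executable, "/home/Dev/program1.py", f"DF_{track}"], check=True)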
Imagine I have 2 files: the first file is a.py and the other is b.py, and I want to call a.py from b.py.
The content of a.py is:
print('this is the a.py file')
and the content of b.py is:
import os
stream = os.popen('python3 a.py')
output = stream.read()
print(output)
Now when I call b.py from the terminal I get the output I expect, which is a.py's print statement:
user#mos ~ % python3 b.py
this is the a.py file
You can do this with the subprocess module too, instead of os.
Here is a nice blog I found online where I got the code from: https://janakiev.com/blog/python-shell-commands/
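For instance, a minimal sketch of the same a.py call using subprocess might look like this (subprocess.run with capture_output requires Python 3.7+):

import subprocess

# Run a.py in its own interpreter and capture what it prints.
result = subprocess.run(['python3', 'a.py'], capture_output=True, text=True)
print(result.stdout)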
See the example below.
a.py
def do_something():
    pass
b.py
from a import do_something
do_something()
I have a batch file which runs a Python script, and in the Python script I have a subprocess call which is being run.
I have tried subprocess.check_output, subprocess.run, and subprocess.Popen; all of them return an empty string, but only when run via the batch file.
If I run it manually or using an IDE, I get the response correctly. Below is the code for subprocess.run:
response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
self.writeInLog(' Command Response: \t' + str(response))
The response has stdout=b''.
When run from the batch file and from Task Scheduler:
Command Response: CompletedProcess(args='tableau refreshextract
--config-file "Z:\XXX\tableau_config\SampleSuperStore.txt"',
returncode=0, stdout=b'', stderr=b'')
When run manually or in an IDE:
Command Response: CompletedProcess(args='tableau refreshextract
--config-file "Z:\XXX\tableau_config\SampleSuperStore.txt"',
returncode=0, stdout=b'Data source refresh completed.\r\n0 rows uploaded.\r\n', stderr=b'')
Batch file which runs the Python program. Parameters are passed to the Python application:
SET config=SampleSuperStore.txt
CALL C:\XXX\AppData\Local\Continuum\anaconda3\Scripts\activate.bat
C:\XXX\AppData\Local\Continuum\anaconda3\python.exe Z:\XXX\pMainManual.py "%config%"
Why is that??
--- Complete Python code ---
try:
    from pWrapper import wrapper
    import sys
except Exception as e:
    print(str(e))

class main:
    def __init__(self):
        self.tableauPath = 'C:\\Program Files\\Tableau\\Tableau 2018.3\\bin\\'
        self.tableauCommand = 'tableau refreshextract --config-file'
    def runJob(self,argv):
        self.manual_sProcess(argv[1])
    def manual_sProcess(self,tableauConfigFile):
        new_wrapper = wrapper()
        new_wrapper.tableauSetup(self.tableauPath,self.tableauCommand)
        if new_wrapper.tableauConfigExists(tableauConfigFile):
            new_wrapper.tableauCommand(tableauConfigFile)

if __name__ == "__main__":
    new_main = main()
    new_main.runJob(sys.argv)
Wrapper class:
def tableauCommand(self,tableauConfigFile):
    command = self.setTableauExeDirectory + ' ' + self.refreshConfigCommand + ' "' + tableauConfigFile + '"'
    self.new_automateTableauExtract.runCommand(tableauConfigFile,command,self.refreshConfigCommand,self.tableauFilePath,self.setTableauExeDirectory)
Automate Class:
def runCommand(self,pConfig,pCommand,pRefreshConfigCommand,pFilePath,pSetTableauExeDirectory):
    try:
        fileCommand = pRefreshConfigCommand + ' "' + pFilePath + '"'
        response = subprocess.run(fileCommand, shell=True, cwd=pSetTableauExeDirectory, capture_output=True)
        self.writeInLog(' Command Response: \t' + str(response))
    except Exception as e:
        self.writeInLog('Exception in function runCommand: ' + str(e))
UPDATE: I initially thought that the bat file was causing this issue, but it looks like it works when the batch file is run manually and not when it is set up in Task Scheduler.
Updated
First of all, if there is a need to run an Anaconda prompt by calling an activate.bat file, you can simply do as follows:
import subprocess

def call_anaconda_venv():
    subprocess.call('python -m venv virtual.env')
    subprocess.call('cmd.exe /k /path/venv/Scripts/activate.bat')

if __name__ == "__main__":
    call_anaconda_venv()
The result of the above code would be a running instance of the Anaconda prompt, as required.
Now, the problem seems to be:
I have a batch file which is running a python script and in the python script, I have a subprocess function which is being run.
I have implemented the same program as required. Suppose we have:
Batch file ---> script.bat: includes a command to run the Python script, i.e. test.py.
Python script file ---> test.py: includes a method to run commands using subprocess.
Batch file ---> sys_info.bat: includes a command which gives the system information of my computer.
First, script.bat includes a command that will run the required Python script, as given below:
python \file_path\test.py
pause
Here, the pause command is used to prevent the console from auto-closing after execution. Next we have test.py, the Python script which uses subprocess to run the required commands and get their output.
from subprocess import check_output

class BatchCommands:
    @staticmethod
    def run_commands_using_subprocess(commands):
        print("Running commands from File: {}".format(commands))
        value = check_output(commands, shell=True).decode()
        return value

    @staticmethod
    def run():
        commands_from_file = "\file-path\sys_info.bat"
        print('##############################################################')
        print("Shell Commands using >>> subprocess-module <<<")
        print('##############################################################')
        values = BatchCommands.run_commands_using_subprocess(commands_from_file)
        print(values)

if __name__ == '__main__':
    BatchCommands.run()
In the end, I have a sys_info.bat file which includes a command that prints the system information of my computer. The command in the sys_info.bat file is as follows:
systeminfo
If you place multiple commands in the sys_info.bat file, you can also run several at a time, for example these commands to renew the IP address of the computer:
ipconfig/all
ipconfig/release
ipconfig/reset
ipconfig/renew
ipconfig
Before using the files, set all the file directory paths, and run the batch file, i.e. script.bat, in the command prompt as follows:
Run the command prompt or terminal as an administrator.
run \file_path\script.bat
This is happening because your IDE is not running in a shell that works the way subprocess expects.
If you set shell=False and specify the absolute path to the batch file, it will run.
You might still need the cwd argument if the batch file requires it.
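A hedged sketch of that suggestion (the executable name tableau.exe is an assumption; the config and install paths are reused from the question above):

import subprocess

# Pass the command as an argument list with shell=False and absolute paths.
response = subprocess.run(
    [r"C:\Program Files\Tableau\Tableau 2018.3\bin\tableau.exe",
     "refreshextract",
     "--config-file", r"Z:\XXX\tableau_config\SampleSuperStore.txt"],
    shell=False,
    cwd=r"C:\Program Files\Tableau\Tableau 2018.3\bin",
    capture_output=True,
)
print(response.stdout.decode())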
I am trying to use Python to run another file. This file is going to start up a socket and create threads for listening for additional connections, and threads for sending/receiving data. The main thread will not return.
However, if the socket setup fails, I want to return an error code to the other Python script that executed the subprocess.
main.py
py3output = subprocess.check_output(['python3', 'py3.py'])
print('py3 said:' + str(py3output))
py3.py
def returnme():
    return 10

returnme()
When I run this, it prints:
py3 said:b''
I am just trying to figure out how to get the return value back to the main calling program.
To return an exit code n back to the OS, you need sys.exit(n). But it seems you do not want to check the exit code but rather the stdout output. So your program might need to be rewritten as:
def returnme():
    return 10

print(returnme())
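If you do want the exit-code route mentioned above, a minimal sketch could look like this (file names are reused from the question; the non-zero code 10 is just an example):

# py3.py
import sys

def returnme():
    return 10  # pretend the socket setup failed with code 10

sys.exit(returnme())

# main.py
import subprocess

result = subprocess.run(['python3', 'py3.py'])
print('py3 exited with code', result.returncode)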
Alternatively, you can just write the value to standard output as a string, using the following code:
sample.py
import sys
def returnme():
    sys.stdout.write(str(10))
    sys.stdout.flush()

returnme()
main.py
from subprocess import check_output
output = check_output(['python','sample.py'])
print('Sample.py says :' + output)
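One caveat, not from the original answer: on Python 3, check_output returns bytes, so you would decode before concatenating:

print('Sample.py says :' + output.decode())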
I cannot get Fabric working when used as a library within my own Python scripts. I made a very short example fabfile.py to demonstrate my problem:
#!/usr/bin/env python
from fabric.api import *

print("Hello")

def test():
    with settings(host_string='myIp', user="myUser", password="myPassword"):
        run("hostname")

if __name__ == '__main__':
    test()
Running fab works like a charm:
$ fab test
Hello
[myIp] run: hostname
[myIp] out: ThisHost
[myIp] out:
Done.
Disconnecting from myUser#myIp... done.
OK, now, running the Python script without fab seems to break somewhere:
$ python fabfile.py
Hello
[myIp] run: hostname
It immediately returns, so it does not even seem to wait for a response. Maybe there are errors, but I don't see how to output them.
I am running this script inside my Vagrant virtual machine. As fab executes without any errors, I guess this should not be a problem!
UPDATE
The script seems to crash, as it does not execute anything after the first run(). local(), on the other hand, works!
We executed the script on a co-worker's laptop and it runs without any issues. I am using Python 2.6.5 on Ubuntu 10.04 with Fabric 1.5.1, so I guess the problem lies with some of this! Is there any way to debug this properly?
I've experienced a similar issue, where the command exited without error but printed just a blank line on the first run()/sudo() command.
So I put the run() command into a try: except: block and printed the traceback:
def do_something():
    print(green("Executing on %(host)s as %(user)s" % env))
    try:
        run("uname -a")
    except:
        import traceback
        tb = traceback.format_exc()
        print(tb)
I saw that the script exited in fabric/network.py at line 419 when it caught an EOFError or TypeError. I modified the script to:
...
except (EOFError, TypeError) as err:
    print err
    # Print a newline (in case user was sitting at prompt)
    print('')
    sys.exit(0)
...
which then printed out:
connect() got an unexpected keyword argument 'sock'
So I removed the sock keyword argument in the connect method a few lines above, and it worked like a charm. I guess it is a problem with the paramiko version, which does not allow the sock keyword.
Versions:
Python 2.7.3
Fabric >= 1.5.3
paramiko 1.10.0
If you look at the fab command script, it looks like this:
sys.exit(
    load_entry_point('Fabric==1.4.3', 'console_scripts', 'fab')()
)
This means it looks for a block labeled console_scripts in a file called entry_points.txt in the Fabric package and executes the method listed there, in this case fabric.main:main.
When we look at this method, we see argument parsing, interesting fabfile importing, and then:
if fabfile:
    docstring, callables, default = load_fabfile(fabfile)
    state.commands.update(callables)
....
for name, args, kwargs, arg_hosts, arg_roles, arg_exclude_hosts in commands_to_run:
    execute(
        name,
        hosts=arg_hosts,
        roles=arg_roles,
        exclude_hosts=arg_exclude_hosts,
        *args, **kwargs
    )
With some experimentation, we can come up with something like:
from fabric import state
from fabric.api import *
from fabric.tasks import execute
from fabric.network import disconnect_all

def test():
    with settings(host_string='host', user="user", password="password"):
        print run("hostname")

if __name__ == '__main__':
    state.commands.update({'test': test})
    execute("test")
    if state.output.status:
        print("\nDone.")
    disconnect_all()
This is obviously very incomplete, but perhaps you only need to add the disconnect_all() line at the end of your script.
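As a hedged sketch of that minimal change, applied to the fabfile from the question (only the extra import and the final disconnect_all() call are added):

#!/usr/bin/env python
from fabric.api import *
from fabric.network import disconnect_all

def test():
    with settings(host_string='myIp', user="myUser", password="myPassword"):
        run("hostname")

if __name__ == '__main__':
    test()
    # Close the cached SSH sessions so the script can exit cleanly.
    disconnect_all()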