Python Read the Device Manager Information

I just need to read all of the information listed in Device Manager with a Python 2.7 script, especially the information under the 'IDE ATA/ATAPI controllers' subcategory. This is needed to detect whether the SATA drives are in AHCI or IDE mode...

My way is not perfect, but it has been a good enough solution for me so far, just for your reference. It goes through devcon.exe, which ships with the WDK (Windows Driver Kit); my code is below.
import subprocess

try:
    # Separate the arguments with commas, not spaces, in the Popen list.
    # Replace the "*" with the device pattern you want to search for.
    proc = subprocess.Popen(["devcon", "status", "*"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = proc.communicate()
    print "stdout: \n", stdout  # device status output on success
    print "stderr: \n", stderr  # devcon's error output, e.g. for a bad argument
except OSError:
    # Popen raises OSError if devcon.exe cannot be found on PATH
    print 'Exception handled'
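Since the goal is to tell AHCI from IDE mode, here is a small follow-up sketch that just scans the captured devcon output for the controller names; it assumes the listed device descriptions contain "AHCI" or "IDE"/"ATA", which varies between drivers:

import subprocess

proc = subprocess.Popen(["devcon", "status", "*"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
# Assumption: the device names mention "AHCI" or "ATA"/"IDE"; adjust the
# keywords to what the full listing on your machine actually shows.
for line in stdout.splitlines():
    if "AHCI" in line:
        print "AHCI controller found:", line.strip()
    elif "IDE" in line or "ATA Channel" in line:
        print "IDE/ATA entry found:", line.strip()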

One easy way (on Windows) is to use the Windows Device Manager API. There is a Python binding for it, the infi.devicemanager package.
After installing the package, the code below will do fine:
from infi.devicemanager import DeviceManager

dm = DeviceManager()
dm.root.rescan()
devices = dm.all_devices
for device in devices:
    print(device)
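As a follow-up, and assuming each device's printable form includes its description (which is what the print loop above suggests), you could narrow the listing down to the storage controllers; the keywords here are a guess:

for device in dm.all_devices:
    text = str(device)
    # Hypothetical filter: keep only entries that look like ATA/AHCI controllers.
    if 'ATA' in text or 'AHCI' in text:
        print(text)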

Related

Script to capture everything on screen

So I have this python3 script that does a lot of automated testing for me; it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to ssh to a remote host for a separate test.
Eventually, I would like to hand this script over to the rest of my team; however, it is missing one feature: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command 'script', but I cannot find an automated way to start script and then run my Python script under it.
I have a command in /usr/bin/:
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is redirecting the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
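If you would rather do the capturing inside the script itself, here is a minimal sketch of my own (not part of the answer above): a tee-like writer that mirrors everything your print() calls produce into a log file. Note that output written directly to the terminal by child processes bypasses it.

import sys

class Tee:
    """Duplicate writes to the real stream and to a log file."""
    def __init__(self, stream, logfile):
        self.stream = stream
        self.logfile = logfile

    def write(self, data):
        self.stream.write(data)
        self.logfile.write(data)

    def flush(self):
        self.stream.flush()
        self.logfile.flush()

log = open('output.log', 'w')
sys.stdout = Tee(sys.stdout, log)   # from here on, print() output also lands in output.log
sys.stderr = Tee(sys.stderr, log)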
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
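For the ttyrec-only route, a rough sketch (assuming ttyrec is installed and using the test path from the question):

import subprocess

# ttyrec's -e flag runs the given command inside the recorded pseudo-terminal
# and writes the whole session to the named file (here session.tty).
subprocess.call(['ttyrec', '-e', 'python3.5 /home/centos/scripts/test.py', 'session.tty'])

The resulting session.tty can then be replayed with ttyplay or the players mentioned above.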
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
I actually managed to do it in python3. It took a lot of work, but here is the Python solution:
import paramiko
from subprocess import Popen, PIPE

# Path of the evidence log; the original script defines this elsewhere.
LOG_RUN_OUTPUT = "/tmp/test_run.log"

# The original code assumes an ssh client created elsewhere; a paramiko
# SSHClient matches the connect()/exec_command() calls below.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

def record_log(output):
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except IOError:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)

def execute(cmd, store=True):
    proc = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output

# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except Exception:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except Exception:
            input("Paused, because " + command + " failed to run.\n"
                  "Please verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)
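For context, a hypothetical call sequence showing how these helpers fit together (the host, key path, and commands here are placeholders, not from the original script):

# Local command: runs via the shell, prints the result and appends it to the log.
execute("uname -a")

# Remote commands: connects with the key file and logs each command's output.
ssh_connect("Connecting to the test host...",
            "192.0.2.10", "centos", "/home/centos/.ssh/id_rsa",
            ["hostname", "df -h"])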

How to run a shell script without having to press enter/confirm something in between

I'm currently writing a shell script which interfaces with numerous Python scripts. In one of these Python scripts I'm calling GRASS without starting it explicitly. When I run my shell script, I have to hit enter at the point where GRASS is called (this is the code I got from the official "working with GRASS" page):
import sys
import subprocess
# gsetup is assumed to come from "import grass.script.setup as gsetup", as in
# the GRASS docs; the other variables are defined earlier in the script.

startcmd = grass7bin + ' -c ' + file_in2 + ' -e ' + location_path
print startcmd
p = subprocess.Popen(startcmd, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if p.returncode != 0:
    print >>sys.stderr, 'ERROR: %s' % err
    print >>sys.stderr, 'ERROR: Cannot generate location (%s)' % startcmd
    sys.exit(-1)
else:
    print 'Created location %s' % location_path
gsetup.init(gisbase, gisdb, location, mapset)
My problem is that I want this process to run automatically, without me having to press enter every time in between!
I have already tried numerous options such as pexpect and uinput (which doesn't work that well because of problems with the module). I know that on Windows you have the msvcrt module, but I am working on Linux... any ideas how to solve this problem?
Use the pexpect library for expect functionality.
Here's an example of interaction with an application that requires the user to type in a password:
import pexpect

child = pexpect.spawn('your command')
child.expect('Enter password:')
child.sendline('your password')
child.expect(pexpect.EOF, timeout=None)
cmd_show_data = child.before
cmd_output = cmd_show_data.split('\r\n')
for data in cmd_output:
    print data
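Adapted to the question, a rough sketch might look like this; the prompt text is a guess (check what GRASS actually prints before it waits for Enter), and startcmd is the command string built in the question:

import pexpect

child = pexpect.spawn(startcmd)           # the grass7bin command built above
child.expect('RETURN')                    # assumed prompt text; adjust to the real GRASS message
child.sendline('')                        # send the Enter the prompt is waiting for
child.expect(pexpect.EOF, timeout=None)
print child.before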
I finally found an easy and fast way to simulate a key press:
just install xdotool and then use the following code to simulate e.g. the Enter key:
import subprocess
subprocess.call(["xdotool", "key", "Return"])

The filename, directory name, or volume label syntax is incorrect

I have a simple Python (2.7) script that should execute a few svn commands:
import subprocess

def getStatusOutput(cmd):
    print cmd
    p = subprocess.Popen([cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    # communicate() returns (stdout, stderr); the return code lives in p.returncode
    output, err = p.communicate()
    return p.returncode, output

# FIRMWARE_URL is defined elsewhere in the original script.
svn_cmd = [
    ["svn co " + FIRMWARE_URL + "/branches/interfaces/ interfaces --depth empty", ""],
    ["svn up interfaces/actual_ver.txt", " Getting current version of a branch "]
]

status, output = getStatusOutput(svn_cmd[0][0])
Unfortunately, when it is run on my friend's machine it fails with the error: "The filename, directory name, or volume label syntax is incorrect."
When I run this on my machine it works fine.
If I change:
status, output = getStatusOutput(svn_cmd[0][0])
to
status, output = getStatusOutput(svn_cmd[0])
Then it will successfully execute the first element of the pair (the command), but then fail on the second (the comment string). Does anyone have any idea what could be wrong?
The solution was easier than I thought. The problem was here:
p = subprocess.Popen([cmd],stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
Specifically, cmd should be passed without the [ ] brackets; otherwise the argument is treated as a list rather than as a string.
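In other words, the corrected call inside getStatusOutput becomes:
# cmd is passed as a plain string, which is what shell=True expects
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)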
Hope this helps someone.
I have similar code which executes fine on Linux but fails on Windows.
It works if I use shlex.split():
import shlex
import subprocess

CMD = "your command"
cmdList = shlex.split(CMD)
proc = subprocess.Popen(cmdList, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
(out, err) = proc.communicate()
print err

Use wget from python with Popen

I am writing a Python (2.7) script that checks whether some files are missing and downloads them via wget. Everything works fine, but after the download has finished and the script should exit, the bash prompt (in the terminal where I started the Python script from) does not show up correctly.
I still have the cursor and can type, but the standard prompt is not displayed. I have to resize the terminal window to make the prompt display correctly. What might be the reason for this?
import subprocess
from subprocess import Popen

tilenames = ['File1', 'File2', ...]
web_url = 'http://...'
for t in tilenames:
    try:
        open(t, 'r')
    except IOError:
        print 'file %s not found.' % (t)
        command = ['wget', '-P', './SRTM/', web_url + t]
        output = Popen(command, stdout=subprocess.PIPE)
print "Done"
I think it has something to do with the way the wget process is invoked. The last statement, print "Done", actually executes before wget has written all of its output to the terminal.
Just call .communicate() on the Popen object, like this:
import subprocess
from subprocess import Popen

tilenames = ['File1', 'File2', ...]
web_url = 'http://...'
for t in tilenames:
    try:
        open(t, 'r')
    except IOError:
        print 'file %s not found.' % (t)
        command = ['wget', '-P', './SRTM/', web_url + t]
        p = Popen(command, stdout=subprocess.PIPE)
        stdout, stderr = p.communicate()
print "Done"
communicate will return the output written to stdout, and None for stderr, because stderr is not connected to a PIPE (you will see it on the terminal instead).
Btw. you should close the file objects you open (and to check whether a file exists you can use the functions in os.path, e.g. os.path.exists).
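For example, the existence check at the top of the loop could be written without opening the file at all (a small sketch of that suggestion):

import os.path

for t in tilenames:
    if not os.path.exists(t):
        print 'file %s not found.' % t
        # ... download it with wget as above ...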
wget writes its statistics to stderr, which is why it scrambles your terminal. stdout and stderr are flushed and queried at different intervals, so it is possible that your Done shows up before the output from wget.
A fix would be to call wget with -q or to also redirect stderr using stderr=open("/dev/null", "w") or something similar.
Additionally, you should probably use .communicate() to avoid pipe issues.
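A small sketch of that fix, reusing the command (and the web_url and t variables) from the question; the -q flag silences wget's progress output, and capturing stderr in a pipe works as well:

import subprocess
from subprocess import Popen

command = ['wget', '-q', '-P', './SRTM/', web_url + t]
p = Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()   # wait for wget and collect both streams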
You could use os.system (but see http://docs.python.org/release/2.5.2/lib/node536.html). Basically, Popen is intended to allow your Python process to read the command's output. You don't seem to need that, so the fragment below should get you what you want:
import os
import subprocess
p = subprocess.Popen(['wget', 'http://www.aol.com'], stdout=subprocess.PIPE)
os.waitpid(p.pid,0)
print "done"
If you add the -q option to wget it works too (quiet mode).

how to get console output from a remote computer (ssh + python)

I have googled "python ssh". There is a wonderful module, pexpect, which can access a remote computer using ssh (with a password).
After the connection to the remote computer is established, I can execute other commands. However, I cannot get the result back in Python.
import pexpect

p = pexpect.spawn("ssh user@remote_computer")
print "connecting..."
p.waitnoecho()
p.sendline(my_password)
print "connected"
p.sendline("ps -ef")
p.expect(pexpect.EOF) # this will take a very long time
print p.before
How to get the result of ps -ef in my case?
Have you tried an even simpler approach?
>>> from subprocess import Popen, PIPE
>>> stdout, stderr = Popen(['ssh', 'user@remote_computer', 'ps -ef'],
... stdout=PIPE).communicate()
>>> print(stdout)
Granted, this only works because I have ssh-agent running preloaded with a private key that the remote host knows about.
child = pexpect.spawn("ssh user@remote_computer ps -ef")
print "connecting..."
i = child.expect(['user@remote_computer\'s password:'])
child.sendline(user_password)
i = child.expect([' .*']) # or use i = child.expect([pexpect.EOF])
if i == 0:
    print child.after # uncomment when using the [' .*'] pattern
    #print child.before # uncomment when using the EOF pattern
else:
    print "Unable to capture output"
Hope this helps.
You might also want to investigate paramiko, which is another SSH library for Python.
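A minimal sketch of what that could look like (the host name and credentials here are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys
client.connect('remote_computer', username='user', password=my_password)
stdin, stdout, stderr = client.exec_command('ps -ef')
print stdout.read()
client.close()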
Try to send
p.sendline("ps -ef\n")
IIRC, the text you send is interpreted verbatim, so the other computer is probably waiting for you to complete the command.
