Running .vbs script inside Python doesn't do anything - python

Idea
Basically, my script checks C:/SOURCE for .txt files and adds a timestamp to their names. To replicate it, you can simply create that folder and put some .txt files in it. The script is then supposed to run a .vbs file, which in turn runs a .bat file with some rclone commands that don't matter here. I did it this way because no CMD window opens when the rclone command is run through the .vbs file.
Python code
import time, os, subprocess

while True:
    print("Beginning checkup")
    print("=================")
    timestamp = time.strftime('%d_%m_%H_%M')  # only underscores: no naming issues
    the_dir = "C:/SOURCE"
    for fname in os.listdir(the_dir):
        if fname.lower().endswith(".txt"):
            print("found " + fname)
            time.sleep(0.1)
            new_name = "{}-{}.txt".format(os.path.splitext(fname)[0], timestamp)
            os.rename(os.path.join(the_dir, fname), os.path.join(the_dir, new_name))
            time.sleep(0.5)
    else:
        subprocess.call(['cscript.exe', "copy.vbs"])
    time.sleep(60)
VBScript code
Set WshShell = CreateObject("WScript.Shell")
WshShell.Run Chr(34) & "copy.bat" & Chr(34), 0
Set WshShell = Nothing
The only important part of the Python script is below the very last else, where the subprocess.call() is supposed to run the .vbs file. What happens when I run the script is that it shows the first two lines that always come up when running CMD, but then nothing.
How could I fix that? I tried:
subprocess.call("cscript copy.vbs")
subprocess.call("cmd /c copy.vbs")
both with the same outcome: nothing happens.
Anyone have an idea?

Why are you invoking a VBScript to invoke a batch script from Python? You should be able to simply run whatever the batch script does directly from your Python code. But even if you want to keep the batch script, something like this should do just fine without VBScript as an intermediary:
subprocess.call(['cmd', '/c', 'copy.bat'])
You may want to give the full path of the batch file, though, to avoid issues like the working directory not being what you think it is.
If your batch script resides in the same directory as the Python script, you can build the path with something like this:
import os
import subprocess
scriptdir = os.path.dirname(__file__)
batchfile = os.path.join(scriptdir, 'copy.bat')
subprocess.call(['cmd', '/c', os.path.realpath(batchfile)])
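If the only reason for the VBScript middleman was to avoid a console window flashing up, subprocess can suppress the window itself. A minimal sketch, assuming Python 3.7+ on Windows, where subprocess exposes the CREATE_NO_WINDOW creation flag:
import subprocess

# CREATE_NO_WINDOW (Windows only, Python 3.7+) launches the child
# process without opening a console window
subprocess.call(
    ['cmd', '/c', 'copy.bat'],
    creationflags=subprocess.CREATE_NO_WINDOW,
)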

There is no operation here that could not be done in plain Python. Scanning a directory, copying a file: Python has it all in the standard library; see the os.path and shutil modules.
Adding VB scripts and launching subprocesses makes your code complex and difficult to debug.
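For illustration, a minimal sketch of a pure-Python replacement, assuming copy.bat merely copies the renamed files to a local destination folder (the real batch file runs rclone, whose commands aren't shown here, so the destination path below is hypothetical):
import os
import shutil

src_dir = "C:/SOURCE"  # the folder the renaming loop already watches
dst_dir = "C:/DEST"    # hypothetical destination folder

os.makedirs(dst_dir, exist_ok=True)
for fname in os.listdir(src_dir):
    if fname.lower().endswith(".txt"):
        # copy2 preserves file timestamps along with the contents
        shutil.copy2(os.path.join(src_dir, fname), os.path.join(dst_dir, fname))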

Related

Problem with broken backup and python script

Right up front, to be clear: I am not fluent in programming or Python, but I can generally accomplish what I need with some research. Please excuse any bad formatting or structure, as this is my first post to a board like this.
I recently updated my laptop from Ubuntu 18.04 to 20.04. I created a full system backup with Dejadup which, due to a missing file, could not be restored. Research brought me to a post on here from 2019 about manually restoring these files. The process calls for two scripts, one to unpack and a second to reconstruct the files, both created by Hamish Downer.
The first,
"for f in duplicity-full.*.difftar.gz; do echo "$f"; tar xf "$f"; done"
seemed to work well and did unpack the files.
The second,
#!/usr/bin/env python3
import argparse
from pathlib import Path
import shutil
import sys"
is the start of the reconstructor script. Using a terminal from within the directory I am trying to rebuild, I enter the first line and press return.
When I enter the second line of code, the terminal just "hangs" with no activity, and only comes back to the prompt if I double-click the cursor. I receive no errors or warnings. When I enter the third line of code,
from pathlib import Path
and press return, I get an error:
from: can't read /var/mail/pathlib
The problem seems to originate with the import argparse command, and I assume it is due to a symlink.
argparse is located in /usr/local/lib/python3.8/dist-packages (1.4.0)
python3 is located in /usr/bin/
Python came with the Ubuntu 20.04 distribution package.
Any help with reconstructing these files would be greatly appreciated, especially in a batch, as this script is meant to do, versus trying to restore them one file at a time.
Update: I have tried adding the "re-constructor" part of this script without success. This is a link to the script I want to use:
https://askubuntu.com/questions/1123058/extract-unencrypted-duplicity-backup-when-all-sigtar-and-most-manifest-files-are
Re-constructor script:
class FileReconstructor():

    def __init__(self, unpacked_dir, restore_dir):
        self.unpacked_path = Path(unpacked_dir).resolve()
        self.restore_path = Path(restore_dir).resolve()

    def reconstruct_files(self):
        for leaf_dir in self.walk_unpacked_leaf_dirs():
            target_path = self.target_path(leaf_dir)
            target_path.parent.mkdir(parents=True, exist_ok=True)
            with target_path.open('wb') as target_file:
                self.copy_file_parts_to(target_file, leaf_dir)

    def copy_file_parts_to(self, target_file, leaf_dir):
        file_parts = sorted(leaf_dir.iterdir(), key=lambda x: int(x.name))
        for file_part in file_parts:
            with file_part.open('rb') as source_file:
                shutil.copyfileobj(source_file, target_file)

    def walk_unpacked_leaf_dirs(self):
        """
        based on the assumption that all leaf files are named as numbers
        """
        seen_dirs = set()
        for path in self.unpacked_path.rglob('*'):
            if path.is_file():
                if path.parent not in seen_dirs:
                    seen_dirs.add(path.parent)
                    yield path.parent

    def target_path(self, leaf_dir_path):
        return self.restore_path / leaf_dir_path.relative_to(self.unpacked_path)


def parse_args(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'unpacked_dir',
        help='The directory with the unpacked tar files',
    )
    parser.add_argument(
        'restore_dir',
        help='The directory to restore files into',
    )
    return parser.parse_args(argv)


def main(argv):
    args = parse_args(argv)
    reconstuctor = FileReconstructor(args.media/jerry/ubuntu, args.media/jerry/Restored)
    return reconstuctor.reconstruct_files()


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
I think you are typing the commands into the shell instead of the Python interpreter. Please check your prompt; the Python interpreter (started with python3) shows >>>.
Linux has an import command (part of ImageMagick) and understands import argparse, but it does something completely different:
import - saves any visible window on an X server and outputs it as an
image file. You can capture a single window, the entire screen, or any
rectangular portion of the screen.
This matches the described behaviour. import waits for a mouse click and then creates a large output file. Check if there is a new file named argparse.
An executable script contains instructions to be processed by an interpreter, and there are many possible interpreters: several shells (bash and alternatives), languages like Perl and Python, and also some very specialized ones like nft for firewall rules.
If you execute a script from the command line, the shell reads its first line. If it starts with the characters #! (called a "shebang"), the shell uses the program listed on that line. (Note: /usr/bin/env there is just a helper to find the exact location of the program.)
But if you want to use an interpreter interactively, you need to start it explicitly. The shebang has special meaning only as the very first line of a script; entered interactively it is just a comment and is ignored.
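In other words, save the whole reconstructor as a file and let python3 run it in one go, passing the two directories as positional arguments for argparse to pick up (the filename reconstruct.py is just a placeholder):
python3 reconstruct.py /path/to/unpacked /path/to/restore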

How to run Open Pose binary (.exe) from within a Python script?

I am making a body tracking application where I want to run Open Pose if the user chooses to track their body movements. The OpenPose binary file can be run like so:
bin\OpenPoseDemo.exe --write_json 'path\to\dump\output'
So, in my Python script, I want to have a line of code that would run Open Pose, instead of having to ask the user to manually run OpenPose by opening a separate command line window. For that, I have tried:
import os
os.popen(r"C:\path\to\bin\OpenPoseDemo.exe --write_json 'C:\path\to\dump\output'")
But this gives the following error:
Error:
Could not create directory: 'C:\Users\Admin\Documents\Openpose\. Status error = -1. Does the parent folder exist and/or do you have writing access to that path?
Which I guess means that OpenPose can be opened only by going inside the openpose directory where the bin subdirectory resides. So, I wrote a shell script containing this line:
bin\OpenPoseDemo.exe --write_json 'C:\path\to\dump\output'
and saved it as run_openpose_binary.sh in the openpose directory (i.e., the same directory where bin is located).
I then tried to run this shell script from within my Python script like so:
import subprocess
subprocess.call(['sh', r'C:\path\to\openpose\run_openpose_binary.sh'])
and this gives the following error:
FileNotFoundError: [WinError 2] The system cannot find the file specified
I also tried the following:
os.popen(r"C:\path\to\openpose\run_openpose_binary.sh")
and
os.system(r"C:\path\to\openpose\run_openpose_binary.sh")
These do not produce any error; instead, a blank window just pops up and closes.
So, my question is, how do I run the OpenPoseDemo.exe from within my Python script?
For your last method, you're missing the return value from os.popen, which is a pipe. So, what you need is something like:
# untested as I don't have access to a Windows system
import os
with os.popen(r"/full/path/to/sh C:/path/to/openpose/run_openpose_binary.sh") as p:
# pipes work like files
output_of_command = p.read().strip() # this is a string
or, if you want to future-proof yourself, the alternative is:
# untested as I don't have access to a Windows system
import subprocess

popen = subprocess.Popen(
    [r'/full/path/to/sh.exe', r'/full/path/to/run_openpose_binary.sh'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    encoding='utf-8',
)
stdout, stderr = popen.communicate(input='')
Leave a comment if you have further difficulty.
I've had to fight this battle several times and I've found a solution. It's likely not the most elegant solution but it does work, and I'll explain it using an example of how to run OpenPose on a video.
You've got your path to the OpenPose download and your path to the video, and from there it's a three-line solution: first change the current working directory to that OpenPose folder, then build your command, then call subprocess.run. (I tried subprocess.call and that did not work. I did not try shell=False, but I have heard it's the safer option; I'll leave that up to you.)
import os
import subprocess
openpose_path = "C:\\Users\\me\\Desktop\\openpose-1.7.0-binaries-win64-gpu-python3.7-flir-3d_recommended\\openpose\\"
video_path = "C:\\Users\\me\\Desktop\\myvideo.mp4"
os.chdir(openpose_path)
command = "".join(["bin\\OpenPoseDemo.exe", " -video ", video_path])
subprocess.run(command, shell=True)
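A variation, if you'd rather not change your own script's working directory: subprocess.run accepts a cwd argument that applies only to the child process. A sketch with hypothetical paths; the executable is given as an absolute path because on Windows a relative program path is not resolved against cwd:
import os
import subprocess

openpose_path = "C:\\path\\to\\openpose"  # hypothetical install folder
video_path = "C:\\path\\to\\myvideo.mp4"

exe = os.path.join(openpose_path, "bin", "OpenPoseDemo.exe")
# cwd= starts OpenPose in its own folder (it looks for its models
# directory relative to the working directory) without chdir-ing the parent
subprocess.run([exe, "--video", video_path], cwd=openpose_path)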

Can a (Python) Script permanently Edit my system Path variable?

I'm trying to make a script that's going to do some stuff via command line. Its usage should look somewhat like this:
C:\users\me> appname startprocess
but if I'm going to release this, it needs to be really handy for everyone to use, and that requires the PATH variable to include my script's location.
I want my script to handle this task by itself. Can my script edit my PATH variable permanently? If so, how?
Most questions on this topic are about how to set the PATH variable for Python scripts, rather than about writing a script that can do it by itself.
For adding the current working directory to Python's module search path (note that this is sys.path, which affects imports in the current process, not the system PATH environment variable), you can use this piece of code:
import os
import sys
pwd = os.getcwd()
sys.path.append(pwd)
And for running a shell script, use this:
import subprocess
subprocess.run('Your Command', shell=True)
If you want to check the output of the command:
stdout = subprocess.check_output('Your Command', shell=True)
print(stdout.decode())
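Note that neither snippet changes the system PATH permanently: sys.path only affects imports, and environment changes made by a child process die with it. On Windows, a persistent change to the user PATH goes through the registry; a rough sketch using the built-in setx command (caution: setx truncates values longer than 1024 characters, and this naive version writes the combined system+user PATH into the user PATH, so treat it as illustrative only):
import os
import subprocess

script_dir = os.path.dirname(os.path.abspath(__file__))

# setx writes to the user environment in the registry; the change only
# applies to consoles opened after this runs
new_path = os.environ.get('PATH', '') + os.pathsep + script_dir
subprocess.run(['setx', 'PATH', new_path])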

run maya from python shell

So I have hundreds of Maya files that have to be run with one script, and I was thinking: why do I even have to bother opening Maya? I should be able to do it from a Python shell (not the Python shell in Maya, but the Python shell in Windows).
So the idea is:
fileList = ["....my huge list of files...."]
for f in fileList:
openMaya
runMyAwesomeScript
I found this:
C:\Program Files\Autodesk\Maya201x\bin\mayapy.exe
maya.standalone.initialize()
And it looks like it loads something, because I can see my scripts loading from custom paths. However, it does not make maya.exe run.
Any help is welcome since I never did this kind of maya python external things.
P.S. Using maya 2015 and python 2.7.3
You are on the right track. maya.standalone runs a headless, non-GUI version of Maya, so it's ideal for batching, but it is essentially a command-line app. Apart from lacking a GUI it is the same as a regular session, so you'll have the same Python path and environment.
You'll want to design your batch process so it doesn't need any UI interactions (for example, make sure you are saving or exporting things in a way that does not throw dialogs at the user).
If you just want a command-line-only Maya, this will let you run a session interactively:
mayapy.exe -i -c "import maya.standalone; maya.standalone.initialize()"
If you have a script to run instead, include import maya.standalone and maya.standalone.initialize() at the top and then whatever work you want to do. Then run it from the command line like this:
mayapy.exe "path/to/script.py"
Presumably you'd want to include a list of files to process in that script and have it just chew through them one at a time. Something like this:
import maya.standalone
maya.standalone.initialize()
import maya.cmds as cmds
import traceback

files = ['path/to/file1.ma', 'path/to/file2.ma']  # ... your huge list of files
succeeded, failed = {}, {}
for eachfile in files:
    cmds.file(eachfile, open=True, force=True)
    try:
        # real work goes here, this is dummy
        cmds.polyCube()
        cmds.file(save=True)
        succeeded[eachfile] = True
    except:
        failed[eachfile] = traceback.format_exc()

print "Processed %i files" % len(files)
print "succeeded:"
for item in succeeded:
    print "\t", item
print "failed:"
for item, reason in failed.items():
    print "\t", item
    print "\t", reason
which should do some operation on a bunch of files and report which ones succeeded and which failed, and for what reason.
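If you'd rather keep the loop outside Maya, as in the original pseudocode, you can also drive mayapy from a regular Windows Python shell, launching one mayapy process per scene. A sketch with hypothetical paths, assuming the standalone script above is changed to read the scene path from sys.argv instead of a hard-coded list:
import subprocess

mayapy = r"C:\Program Files\Autodesk\Maya2015\bin\mayapy.exe"
batch_script = r"C:\scripts\process_scene.py"  # the standalone script above

fileList = [r"C:\scenes\scene1.ma", r"C:\scenes\scene2.ma"]
for f in fileList:
    # each scene gets a fresh, clean Maya session
    subprocess.call([mayapy, batch_script, f])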

Running python script within a shell script: files don't save

I am very new to shell scripting, so I'm still figuring things out. Here is my problem:
I have an executable Python .py file which creates multiple files and saves them to a directory. I need to run that file from a shell script. For some reason, the shell script executes the Python script but no new files appear in my directory. When I just run the .py file directly, everything works fine.
Here's what my shell script looks like:
#!/bin/bash
cd /home/usr/directory
python myfile.py
Within my Python script, the files being saved are pickled object instances, so each save looks something like this:
f = file('/home/usr/anotherdirectory/myfile.p','w')
pickle.dump(myObject,f)
f.close()
This line:
f = file('/home/usr/directory/myfile.p','w')
Should be:
f = open('/home/usr/directory/myfile.p','wb+')
For best practice, it should be done like this:
with open('/home/usr/directory/myfile.p','wb+') as fs:
    pickle.dump(myObject, fs)
The documentation for the file function states:
When opening a file, it’s preferable to use open() instead of invoking this constructor directly.
Problems like this may be one of the reasons why. Try changing
f = file('/home/usr/directory/myfile.p','w')
to
f = open('/home/usr/directory/myfile.p','w')
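Combining the suggestions above, a minimal sketch of the save step (myObject below is a placeholder for whatever the script actually pickles): open the file in binary mode, and let a with block close it even if pickling raises:
import pickle

myObject = {"example": 1}  # placeholder object

# binary mode ('wb') matters for pickle, and the with block
# guarantees the file is flushed and closed
with open('/home/usr/anotherdirectory/myfile.p', 'wb') as f:
    pickle.dump(myObject, f)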
