How do I create an automated test for my Python script?

I am fairly new to programming and currently working on a python script. It is supposed to gather all the files and directories that are given as paths inside the program and copy them to a new location that the user can choose as an input.
import shutil
import os
from pathlib import Path
import argparse

src = [
    ["<insert name of destination directory>", "<insert path of file/directory that should be copied>"]
]

x = input("Please choose a destination path\n>>>")
if not os.path.exists(x):
    os.makedirs(x)
    print("Directory was created")
else:
    print("Existing directory was chosen")

dest = Path(x.strip())
for pfad in src:
    if os.path.isdir(pfad[1]):
        shutil.copytree(pfad[1], dest / pfad[0])
    elif os.path.isfile(pfad[1]):
        pfad1 = Path(dest / pfad[0])
        if not os.path.exists(pfad1):
            os.makedirs(pfad1)
        shutil.copy(pfad[1], dest / pfad[0])
    else:
        print("An error occurred")
        print(pfad)

print("All files and directories have been copied!")
input()
The script itself is working just fine. The problem is that I want to write a test that automatically tests the code each time I push it to my GitLab repository. I have been browsing the web for quite some time now but wasn't able to find a good explanation of how to approach creating a test for a script like this.
I would be extremely thankful for any kind of feedback or hints to helpful resources.

First, you should write a test that you can run in command line.
I suggest you use the argparse module to pass source and destination directories, so that you can run the script as thescript.py source_dir dest_dir without human interaction.
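A rough sketch of both ideas, with illustrative names (copy_item, test_copy_script.py and the argument names are assumptions, not taken from your code):

# hypothetical sketch of thescript.py: paths come from the command line instead of input()
import argparse
import shutil
from pathlib import Path

def copy_item(source, destination):
    """Copy a single file or directory into the destination directory."""
    source = Path(source)
    destination = Path(destination)
    destination.mkdir(parents=True, exist_ok=True)
    if source.is_dir():
        shutil.copytree(source, destination / source.name)
    else:
        shutil.copy(source, destination / source.name)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("source", help="file or directory to copy")
    parser.add_argument("destination", help="directory to copy it into")
    args = parser.parse_args()
    copy_item(args.source, args.destination)

With the copying logic in a plain function, a test (for example with pytest) can build a small source tree in a temporary directory, call the function, and assert the files arrived:

# hypothetical test_copy_script.py, runnable with pytest
from thescript import copy_item

def test_copies_a_file(tmp_path):
    src = tmp_path / "a.txt"
    src.write_text("hello")
    dest = tmp_path / "out"
    copy_item(src, dest)
    assert (dest / "a.txt").read_text() == "hello"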
Then, as you have a test you can run, you need to add a .gitlab-ci.yml file at the root of the project so that you can use GitLab CI.
If you never used the gitlab CI, you need to start here: https://docs.gitlab.com/ee/ci/quick_start/
After that, you'll be able to add a job to your .gitlab-ci.yml, so that a runner with Python installed will run the test. If the terms job and runner in the previous sentence don't mean anything to you, you need to understand GitLab CI first.
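As a rough illustration of such a job (the image tag and test file name are assumptions, not taken from your project), the .gitlab-ci.yml could look something like:

# hypothetical minimal .gitlab-ci.yml
test:
  image: python:3.11
  script:
    - pip install pytest
    - pytest test_copy_script.py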

Related

How to run Open Pose binary (.exe) from within a Python script?

I am making a body tracking application where I want to run Open Pose if the user chooses to track their body movements. The OpenPose binary file can be run like so:
bin\OpenPoseDemo.exe --write_json 'path\to\dump\output'
So, in my Python script, I want to have a line of code that would run Open Pose, instead of having to ask the user to manually run OpenPose by opening a separate command line window. For that, I have tried:
import os
os.popen(r"C:\path\to\bin\OpenPoseDemo.exe --write_json 'C:\path\to\dump\output'")
But this gives the following error:
Error:
Could not create directory: 'C:\Users\Admin\Documents\Openpose\. Status error = -1. Does the parent folder exist and/or do you have writing access to that path?
Which I guess means that OpenPose can be opened only by going inside the openpose directory where the bin subdirectory resides. So, I wrote a shell script containing this line:
bin\OpenPoseDemo.exe --write_json 'C:\path\to\dump\output'
and saved it as run_openpose_binary.sh in the openpose directory (i.e., the same directory where bin is located).
I then tried to run this shell script from within my Python script like so:
import subprocess
subprocess.call(['sh', r'C:\path\to\openpose\run_openpose_binary.sh'])
and this gives the following error:
FileNotFoundError: [WinError 2] The system cannot find the file specified
I also tried the following:
os.popen(r"C:\path\to\openpose\run_openpose_binary.sh")
and
os.system(r"C:\path\to\openpose\run_openpose_binary.sh")
These do not produce any error, but instead just pop up a blank window that closes immediately.
So, my question is, how do I run the OpenPoseDemo.exe from within my Python script?
For your last method, you're missing the return value from os.popen, which is a pipe. So, what you need is something like:
# untested as I don't have access to a Windows system
import os
with os.popen(r"/full/path/to/sh C:/path/to/openpose/run_openpose_binary.sh") as p:
    # pipes work like files
    output_of_command = p.read().strip()  # this is a string
or, if you want to future-proof yourself, the alternative is:
# untested as I don't have access to a Windows system
import subprocess
popen = subprocess.Popen([r'/full/path/to/sh.exe', r'/full/path/to/run_openpose_binary.sh'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8')
stdout, stderr = popen.communicate(input='')
Leave a comment if you have further difficulty.
I've had to fight this battle several times and I've found a solution. It's likely not the most elegant solution but it does work, and I'll explain it using an example of how to run OpenPose on a video.
You've got your path to the OpenPose download and your path to the video, and from there it's a three-line solution: change the current working directory to that openpose folder, build your command, and call subprocess.run. (I tried using subprocess.call and that did not work. I did not try shell=False, but I have heard it is the safer option; I'll leave that up to you.)
import os
import subprocess
openpose_path = "C:\\Users\\me\\Desktop\\openpose-1.7.0-binaries-win64-gpu-python3.7-flir-3d_recommended\\openpose\\"
video_path = "C:\\Users\\me\\Desktop\\myvideo.mp4"
os.chdir(openpose_path)
command = "".join(["bin\\OpenPoseDemo.exe", " -video ", video_path])
subprocess.run(command, shell=True)

Python line by line execution

I couldn't find a solution to this question using the search option, so my question is:
I have a script that does the job, but only for one file. Just to explain what's going on here:
import sys
sys.path.append(r'C:\Program Files\FME\fmeobjects\python27')
import fmeobjects

runner = fmeobjects.FMEWorkspaceRunner()
workspace = r'C:\FME\Project_1.fmw'
parameters = {}
parameters['SourceDataset_ACAD'] = r'C:\AutoCAD\Project_1.dwg'
parameters['DestDataset_OGCKML'] = r'C:\Maps_KMZ\Project_1.kmz'

try:
    # Run the workspace with the parameters set above
    runner.runWithParameters(workspace, parameters)
    # or use promptRun to prompt for published parameters
    # runner.promptRun(workspace)
except fmeobjects.FMEException as ex:
    # Print the FME exception if the workspace failed
    print ex.message
else:
    # Tell the user the workspace ran
    print('The workspace {} ran successfully'.format(workspace))
runner = None
This script executes an FMW file that converts an AutoCAD DWG (C:\AutoCAD) to a KMZ file and stores it in the C:\Maps_KMZ folder. Now, I need to do the same thing for about 20-ish FME files that are in the same source folder.
Is it possible to execute each file one at a time and add a specific time gap between two executions, let's say a 2-minute pause, because I cannot run two or more conversions at the same time; it would crash Windows.
Thank you very much for your help!
I suggest that you modify your script to use command-line arguments. You can either use sys.argv directly for a very simple interface or the argparse module for more complex options.
You can write the interface to accept individual file names or directory names. To traverse the files of a directory, look at os.walk().
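A rough sketch of how that could look, assuming every workspace takes the same two parameters and that the DWG/KMZ file names match the workspace name (run_workspace and the folder paths are illustrative, not from your setup):

import os
import sys
import time
import fmeobjects

def run_workspace(runner, workspace, source_dwg, dest_kmz):
    parameters = {'SourceDataset_ACAD': source_dwg, 'DestDataset_OGCKML': dest_kmz}
    runner.runWithParameters(workspace, parameters)

if __name__ == '__main__':
    fmw_dir = sys.argv[1]  # e.g. C:\FME, passed on the command line
    runner = fmeobjects.FMEWorkspaceRunner()
    for root, dirs, files in os.walk(fmw_dir):
        for name in files:
            if name.lower().endswith('.fmw'):
                base = os.path.splitext(name)[0]
                run_workspace(runner,
                              os.path.join(root, name),
                              os.path.join(r'C:\AutoCAD', base + '.dwg'),
                              os.path.join(r'C:\Maps_KMZ', base + '.kmz'))
                time.sleep(120)  # 2-minute pause between conversions
    runner = None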

Methods to avoid hard-coding file paths in Python

Working with scientific data, specifically climate data, I am constantly hard-coding paths to data directories in my Python code. Even if I were to write the most extensible code in the world, the hard-coded file paths prevent it from ever being truly portable. I also feel like having information about the file system of your machine coded in your programs could be a security issue.
What solutions are out there for handling the configuration of paths in Python to avoid having to code them out explicitly?
One solution relies on using configuration files.
You can store all your paths in a JSON file like so:
{
    "base_path": "/home/bob/base_folder",
    "low_temp_area_path": "/home/bob/base/folder/low_temp"
}
and then, in your Python code, you could just do:
import json

with open("conf.json") as json_conf:
    CONF = json.load(json_conf)
and then you can use your paths (or any configuration variable you like) like so:
print("The base path is {}".format(CONF["base_path"]))
First off, it's always good practice to add a main function to each file to test the classes or functions it contains. Along with this, you can determine the current working directory. This becomes incredibly important when running Python from a cron job or from a directory that is not the current working directory. No JSON files or environment variables are then needed, and you get interoperability across Mac, RHEL and Debian distributions.
This is how you do it, and it will work on Windows as well if you use '\' instead of '/' (if that is even necessary in your case).
if "__main__" == __name__:
workingDirectory = os.path.realpath(sys.argv[0])
As you can see, when you run the script the working directory is calculated whether you provide a full or a relative path, meaning it will also work from a cron job automatically.
After that, if you want to work with data stored relative to that directory, use:
fileName = os.path.join(workingDirectory, 'sub-folder-of-current-directory/filename.csv')
fp = open(fileName, 'r')
or in the case of the above working directory (parallel to your project directory):
fileName = os.path.join(workingDirectory, '../folder-at-same-level-as-my-project/filename.csv')
fp = open(fileName, 'r')
I believe there are many ways around this, but here is what I would do:
Create a JSON config file with all the paths I need defined.
For even more portability, I'd have a default path where I look for this config file but also have a command line input to change it.
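A minimal sketch of that idea, assuming a conf.json like the one above (the --config option name is illustrative, not from any of the answers):

import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--config", default="conf.json",
                    help="JSON file that defines the data paths")
args = parser.parse_args()

with open(args.config) as fh:
    paths = json.load(fh)

print("The base path is {}".format(paths["base_path"]))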
In my opinion, passing arguments from the command line would be the best solution. You should take a look at argparse. This allows you to create a nice way to handle arguments from the command line. For example:
myDataScript.py /home/userName/datasource1/
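A minimal sketch of what such a myDataScript.py could look like (the data_dir argument name is an assumption):

import argparse

parser = argparse.ArgumentParser(description="Process a climate data directory")
parser.add_argument("data_dir", help="path to the data directory")
args = parser.parse_args()

print("Reading data from {}".format(args.data_dir))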

How to use the Sphinx document generator on Windows?

I want to make a site using the Sphinx document generator, but my machine runs Windows. I can't find sufficient resources for Windows. If anyone could suggest a way to use Sphinx on Windows, it would be a great help.
Thanks.
Sphinx works just fine on Windows. To get started, go to the Quickstart tutorial and follow the instructions. One thing I would add: make sure you answer "y" to the question that asks whether you want to separate the build and source folders. This will make things simpler later.
If you want to use apidoc (I think you do), then you can use the command-line tools in the Scripts folder of your Python install, or you can write your own script. Below is one I wrote to get .rst files for some target modules:
import os
import shutil

files = [u'C:\\Work\\Scripts\\Grapher\\both_pyplot.py',
         u'C:\\Work\\Scripts\\Grapher\\colors_pyplot.py',
         u'C:\\Work\\Scripts\\Grapher\\dataEditor.pyw',
         u'C:\\Work\\Scripts\\Grapher\\grapher.pyw']

# recreate the temporary working directories
for d in ('pyfiles', 'rst_temp'):
    try:
        shutil.rmtree(d)
    except WindowsError:
        pass
    os.mkdir(d)

# copy and rename .pyw files to .py so sphinx will pick them up
for fn in files:
    fn2 = fn
    if fn.lower().endswith('.pyw'):
        fn2 = fn[:-1]
    shutil.copy2(fn, os.path.join('pyfiles', os.path.basename(fn2)))

# now send to apidoc
lst = [fn, '-o', 'rst_temp', 'pyfiles']
from sphinx.apidoc import main
main(lst)

msg = ('Now copy the rst files you want documentation for from the '
       '"rst_temp" dir to the "source" dir, edit the index.html file '
       'in the "source" dir, and run builder.py')
print msg
The apidoc extension does not recognize .pyw files, so this script copies target modules to a temporary location and renames them with a .py extension so apidoc can use them.
To build your project, you can run the make.bat file in your project folder (created when quickstart runs) or you can write your own script. Here's a sample (builder.py):
import sys, os
fn = __file__
sys.path.append(os.path.normpath('C:\\Work\\Scripts\\Grapher'))
lst = [fn, '-b', 'html', 'source', 'build']
from sphinx import main
main(lst)

Python. Unchroot directory

I chrooted a directory using the following command:
os.chroot("/mydir")
How do I return to the previous directory - the one from before chrooting?
Is it even possible to unchroot the directory?
SOLUTION:
Thanks to Phihag, I found a solution. Simple example:
import os
os.mkdir('/tmp/new_dir')
dir1 = os.open('.', os.O_RDONLY)
dir2 = os.open('/tmp/new_dir', os.O_RDONLY)
os.getcwd() # we are in 'tmp'
os.chroot('/tmp/new_dir') # chrooting 'new_dir' directory
os.fchdir(dir2)
os.getcwd() # we are in chrooted directory, but path is '/'. It's OK.
os.fchdir(dir1)
os.getcwd() # we came back to not chrooted 'tmp' directory
os.close(dir1)
os.close(dir2)
More info
If you haven't changed your current working directory, you can simply call
os.chroot('../..') # Add '../' as needed
Of course, this requires the CAP_SYS_CHROOT capability (usually only given to root).
If you have changed your working directory, you can still escape, but it's harder:
os.mkdir('tmp')
os.chroot('tmp')
os.chdir('../../') # Add '../' as needed
os.chroot('.')
If chroot changes the current working directory, you can get around that by opening the directory, and using fchdir to go back.
Of course, if you intend to go out of a chroot in the course of a normal program (i.e. not a demonstration or security exploit), you should rethink your program. First of all, do you really need to escape the chroot? Why can't you just copy the required info into it beforehand?
Also, consider using a second process that stays outside of the chroot and answers to the requests of the chrooted one.
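A minimal sketch of that two-process idea, assuming a Unix system, root privileges, and the /tmp/new_dir directory from the example above (the request/response handling here is purely illustrative):

import os

req_r, req_w = os.pipe()    # child -> parent: path the child wants to read
resp_r, resp_w = os.pipe()  # parent -> child: the file's contents

if os.fork() == 0:
    # child: enters the chroot, then asks the parent for a file that lives outside it
    os.chroot('/tmp/new_dir')
    os.chdir('/')
    os.write(req_w, b'/etc/hostname')
    print(os.read(resp_r, 4096).decode())
    os._exit(0)

# parent: never enters the chroot, so it can still read the real filesystem
path = os.read(req_r, 4096).decode()
with open(path, 'rb') as f:
    os.write(resp_w, f.read())
os.wait()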
