I was trying out the optimization process of the Backtrader library. The code runs quite well on a multi-core CPU; the complete optimization took around 22.35 seconds. But it could be even faster if it ran on the GPU.
Hence, I would like to know how I can run the following on the GPU:
#!/usr/bin/env python
# -*- coding: utf-8; py-indent-offset:4 -*-
from __future__ import (absolute_import, division, print_function,
                        unicode_literals)
import argparse
import datetime
import time
from backtrader.utils.py3 import range
import backtrader as bt
import backtrader.indicators as btind
import backtrader.feeds as btfeeds
class OptimizeStrategy(bt.Strategy):
    params = (('smaperiod', 15),
              ('macdperiod1', 12),
              ('macdperiod2', 26),
              ('macdperiod3', 9),
              )

    def __init__(self):
        # Add indicators to add load
        btind.SMA(period=self.p.smaperiod)
        btind.MACD(period_me1=self.p.macdperiod1,
                   period_me2=self.p.macdperiod2,
                   period_signal=self.p.macdperiod3)


def runstrat():
    args = parse_args()

    # Create a cerebro entity
    cerebro = bt.Cerebro(maxcpus=args.maxcpus,
                         runonce=not args.no_runonce,
                         exactbars=args.exactbars,
                         optdatas=not args.no_optdatas,
                         optreturn=not args.no_optreturn)

    # Add a strategy
    cerebro.optstrategy(
        OptimizeStrategy,
        smaperiod=range(args.ma_low, args.ma_high),
        macdperiod1=range(args.m1_low, args.m1_high),
        macdperiod2=range(args.m2_low, args.m2_high),
        macdperiod3=range(args.m3_low, args.m3_high),
    )

    # Get the dates from the args
    fromdate = datetime.datetime.strptime(args.fromdate, '%Y-%m-%d')
    todate = datetime.datetime.strptime(args.todate, '%Y-%m-%d')

    # Create the 1st data
    data = btfeeds.BacktraderCSVData(
        dataname=args.data,
        fromdate=fromdate,
        todate=todate)

    # Add the Data Feed to Cerebro
    cerebro.adddata(data)

    # clock the start of the process
    tstart = time.clock()

    # Run over everything
    stratruns = cerebro.run()

    # clock the end of the process
    tend = time.clock()

    print('==================================================')
    for stratrun in stratruns:
        print('**************************************************')
        for strat in stratrun:
            print('--------------------------------------------------')
            print(strat.p._getkwargs())
    print('==================================================')

    # print out the result
    print('Time used:', str(tend - tstart))
def parse_args():
    parser = argparse.ArgumentParser(
        description='Optimization',
        formatter_class=argparse.RawTextHelpFormatter,
    )
    parser.add_argument(
        '--data', '-d',
        default='2006-day-001.txt',
        help='data to add to the system')
    parser.add_argument(
        '--fromdate', '-f',
        default='2006-01-01',
        help='Starting date in YYYY-MM-DD format')
    parser.add_argument(
        '--todate', '-t',
        default='2006-12-31',
        help='Ending date in YYYY-MM-DD format')
    parser.add_argument(
        '--maxcpus', '-m',
        type=int, required=False, default=0,
        help=('Number of CPUs to use in the optimization'
              '\n'
              ' - 0 (default): use all available CPUs\n'
              ' - 1 -> n: use as many as specified\n'))
    parser.add_argument(
        '--no-runonce', action='store_true', required=False,
        help='Run in next mode')
    parser.add_argument(
        '--exactbars', required=False, type=int, default=0,
        help=('Use the specified exactbars still compatible with preload\n'
              ' 0 No memory savings\n'
              ' -1 Moderate memory savings\n'
              ' -2 Less moderate memory savings\n'))
    parser.add_argument(
        '--no-optdatas', action='store_true', required=False,
        help='Do not optimize data preloading in optimization')
    parser.add_argument(
        '--no-optreturn', action='store_true', required=False,
        help='Do not optimize the returned values to save time')
    parser.add_argument(
        '--ma_low', type=int,
        default=10, required=False,
        help='SMA range low to optimize')
    parser.add_argument(
        '--ma_high', type=int,
        default=30, required=False,
        help='SMA range high to optimize')
    parser.add_argument(
        '--m1_low', type=int,
        default=12, required=False,
        help='MACD Fast MA range low to optimize')
    parser.add_argument(
        '--m1_high', type=int,
        default=20, required=False,
        help='MACD Fast MA range high to optimize')
    parser.add_argument(
        '--m2_low', type=int,
        default=26, required=False,
        help='MACD Slow MA range low to optimize')
    parser.add_argument(
        '--m2_high', type=int,
        default=30, required=False,
        help='MACD Slow MA range high to optimize')
    parser.add_argument(
        '--m3_low', type=int,
        default=9, required=False,
        help='MACD Signal range low to optimize')
    parser.add_argument(
        '--m3_high', type=int,
        default=15, required=False,
        help='MACD Signal range high to optimize')

    return parser.parse_args()


if __name__ == '__main__':
    runstrat()
The sample data used is here: Sample Data file
Let me know what improvement I can make. I thought of using Numba, PyCUDA, or PyOpenCL.
I don't think this is possible for Backtrader, as mentioned in their FAQ:
Optimization is slow and consumes RAM
Indeed. Parameter overfitting is also a great and formidable enemy of algorithmic trading.
I want to use the GPU for optimization
Nice idea, but the multiprocessing module in Python won't do that.
https://community.backtrader.com/topic/381/faq
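To make the FAQ's point concrete: what cerebro.run() parallelizes during an optimization is whole backtests, one per parameter combination, spread across OS processes. It works roughly in the spirit of the sketch below (this is not backtrader's actual code, only an illustration of the multiprocessing pattern), and a pool of CPU processes is not something a GPU can stand in for:
import multiprocessing

def run_one_backtest(params):
    # Illustration only -- NOT backtrader's implementation. In the real thing
    # this would build a Cerebro instance with `params`, run the strategy and
    # return the results.
    return params

if __name__ == '__main__':
    param_grid = [{'smaperiod': p} for p in range(10, 30)]
    with multiprocessing.Pool() as pool:      # uses CPU cores, never the GPU
        results = pool.map(run_one_backtest, param_grid)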
CUDA and the like can be used when you want to offload a computation to the GPU, as in the example below:
import numpy as np
from timeit import default_timer as timer
from numba import vectorize

@vectorize(['float32(float32, float32)'], target='cuda')
def pow(a, b):
    return a ** b

def main():
    vec_size = 100000000

    a = b = np.array(np.random.sample(vec_size), dtype=np.float32)
    c = np.zeros(vec_size, dtype=np.float32)

    start = timer()
    c = pow(a, b)
    duration = timer() - start
    print(duration)

if __name__ == '__main__':
    main()
PS: Example credits to https://weeraman.com/put-that-gpu-to-good-use-with-python-e5a437168c01
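If no CUDA-capable GPU is available, the same idea can target the multi-core CPU instead; here is a minimal sketch assuming only that Numba is installed (the function name pow_cpu and the array size are mine):
import numpy as np
from numba import vectorize

# Same ufunc as above, compiled for all CPU cores instead of the GPU.
@vectorize(['float32(float32, float32)'], target='parallel')
def pow_cpu(a, b):
    return a ** b

a = b = np.random.sample(10_000_000).astype(np.float32)
c = pow_cpu(a, b)   # runs in parallel on the CPU, no GPU required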
It cannot be done without rewriting the framework (as opposed to rewriting your code). One of the main goals of the platform was to be pure Python and only use packages which are in the standard library.
Reference: myself (parent of the beast)
The question has been asked before, and that's why there is an explicit mention in the FAQ.
Related
I know how to execute it via the command line, but I am looking for input on how to execute it from inside another Python script. I tried passing the arguments as main(--score-thr 0.7, --show, vid, config_file, chkpnt_file). Not sure that's how to do it, though. Thank you in advance for helping out!
import argparse

import cv2
import mmcv

from mmdet.apis import inference_detector, init_detector


def parse_args():
    parser = argparse.ArgumentParser(description='MMDetection video demo')
    parser.add_argument('video', help='Video file')
    parser.add_argument('config', help='Config file')
    parser.add_argument('checkpoint', help='Checkpoint file')
    parser.add_argument(
        '--device', default='cuda:0', help='Device used for inference')
    parser.add_argument(
        '--score-thr', type=float, default=0.7, help='Bbox score threshold')
    parser.add_argument('--out', type=str, help='Output video file')
    parser.add_argument('--show', action='store_true', help='Show video')
    parser.add_argument(
        '--wait-time',
        type=float,
        default=1,
        help='The interval of show (s), 0 is block')
    args = parser.parse_args()
    return args


def main():
    args = parse_args()
    assert args.out or args.show, \
        ('Please specify at least one operation (save/show the '
         'video) with the argument "--out" or "--show"')

    model = init_detector(args.config, args.checkpoint, device=args.device)

    video_reader = mmcv.VideoReader(args.video)
    video_writer = None
    if args.out:
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        video_writer = cv2.VideoWriter(
            args.out, fourcc, video_reader.fps,
            (video_reader.width, video_reader.height))

    for frame in mmcv.track_iter_progress(video_reader):
        result = inference_detector(model, frame)
        frame = model.show_result(frame, result, score_thr=args.score_thr)
        if args.show:
            cv2.namedWindow('video', 0)
            mmcv.imshow(frame, 'video', args.wait_time)
        if args.out:
            video_writer.write(frame)

    if video_writer:
        video_writer.release()
    cv2.destroyAllWindows()


if __name__ == '__main__':
    main()
main() gets its arguments from sys.argv when it uses argparse, so you need to fill that in before calling it (import sys in the calling script if it isn't already imported):
sys.argv = ['progname.py', '--score-thr', '0.7', '--show', vid, config_file, chkpnt_file]
main()
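An alternative, if you're willing to edit the demo script itself, is to let parse_args accept an explicit argument list (argparse falls back to sys.argv[1:] when given None), so nothing global needs patching. A rough sketch of that change, where vid, config_file and chkpnt_file are assumed to be variables holding your paths:
def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='MMDetection video demo')
    # ... same add_argument calls as in the script above ...
    return parser.parse_args(argv)   # argv=None -> read sys.argv[1:] as before

def main(argv=None):
    args = parse_args(argv)
    # ... rest of main unchanged ...

# Called from another script:
main([vid, config_file, chkpnt_file, '--score-thr', '0.7', '--show'])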
I'm trying to use IRNet from a GitHub repository. The instructions say I need to run the command python run_sample.py to make it work. I have cloned the repo into my Google Colab, but running that command throws the error run_sample.py: error: the following arguments are required: --voc12_root. Looking at run_sample.py, I can see the program requires a path: parser.add_argument("--voc12_root", required=True, type=str, help="Path to VOC 2012 Devkit, must contain ./JPEGImages as subdirectory."). However, I simply do not know how to satisfy this requirement. Where and how do I paste it?
I'm trying to use this repo: https://github.com/jiwoon-ahn/irn
Here is the program I'm trying to run:
import argparse
import os

from misc import pyutils

if __name__ == '__main__':

    parser = argparse.ArgumentParser()

    # Environment
    parser.add_argument("--num_workers", default=os.cpu_count()//2, type=int)
    parser.add_argument("--voc12_root", required=True, type=str,
                        help="Path to VOC 2012 Devkit, must contain ./JPEGImages as subdirectory.")

    # Dataset
    parser.add_argument("--train_list", default="voc12/train_aug.txt", type=str)
    parser.add_argument("--val_list", default="voc12/val.txt", type=str)
    parser.add_argument("--infer_list", default="voc12/train.txt", type=str,
                        help="voc12/train_aug.txt to train a fully supervised model, "
                             "voc12/train.txt or voc12/val.txt to quickly check the quality of the labels.")
    parser.add_argument("--chainer_eval_set", default="train", type=str)

    # Class Activation Map
    parser.add_argument("--cam_network", default="net.resnet50_cam", type=str)
    parser.add_argument("--cam_crop_size", default=512, type=int)
    parser.add_argument("--cam_batch_size", default=16, type=int)
    parser.add_argument("--cam_num_epoches", default=5, type=int)
    parser.add_argument("--cam_learning_rate", default=0.1, type=float)
    parser.add_argument("--cam_weight_decay", default=1e-4, type=float)
    parser.add_argument("--cam_eval_thres", default=0.15, type=float)
    parser.add_argument("--cam_scales", default=(1.0, 0.5, 1.5, 2.0),
                        help="Multi-scale inferences")

    # Mining Inter-pixel Relations
    parser.add_argument("--conf_fg_thres", default=0.30, type=float)
    parser.add_argument("--conf_bg_thres", default=0.05, type=float)

    # Inter-pixel Relation Network (IRNet)
    parser.add_argument("--irn_network", default="net.resnet50_irn", type=str)
    parser.add_argument("--irn_crop_size", default=512, type=int)
    parser.add_argument("--irn_batch_size", default=32, type=int)
    parser.add_argument("--irn_num_epoches", default=3, type=int)
    parser.add_argument("--irn_learning_rate", default=0.1, type=float)
    parser.add_argument("--irn_weight_decay", default=1e-4, type=float)

    # Random Walk Params
    parser.add_argument("--beta", default=10)
    parser.add_argument("--exp_times", default=8,
                        help="Hyper-parameter that controls the number of random walk iterations,"
                             "The random walk is performed 2^{exp_times}.")
    parser.add_argument("--ins_seg_bg_thres", default=0.25)
    parser.add_argument("--sem_seg_bg_thres", default=0.25)

    # Output Path
    parser.add_argument("--log_name", default="sample_train_eval", type=str)
    parser.add_argument("--cam_weights_name", default="sess/res50_cam.pth", type=str)
    parser.add_argument("--irn_weights_name", default="sess/res50_irn.pth", type=str)
    parser.add_argument("--cam_out_dir", default="result/cam", type=str)
    parser.add_argument("--ir_label_out_dir", default="result/ir_label", type=str)
    parser.add_argument("--sem_seg_out_dir", default="result/sem_seg", type=str)
    parser.add_argument("--ins_seg_out_dir", default="result/ins_seg", type=str)

    # Step
    parser.add_argument("--train_cam_pass", default=True)
    parser.add_argument("--make_cam_pass", default=True)
    parser.add_argument("--eval_cam_pass", default=True)
    parser.add_argument("--cam_to_ir_label_pass", default=True)
    parser.add_argument("--train_irn_pass", default=True)
    parser.add_argument("--make_ins_seg_pass", default=True)
    parser.add_argument("--eval_ins_seg_pass", default=True)
    parser.add_argument("--make_sem_seg_pass", default=True)
    parser.add_argument("--eval_sem_seg_pass", default=True)

    args = parser.parse_args()

    os.makedirs("sess", exist_ok=True)
    os.makedirs(args.cam_out_dir, exist_ok=True)
    os.makedirs(args.ir_label_out_dir, exist_ok=True)
    os.makedirs(args.sem_seg_out_dir, exist_ok=True)
    os.makedirs(args.ins_seg_out_dir, exist_ok=True)
    pyutils.Logger(args.log_name + '.log')
    print(vars(args))

    if args.train_cam_pass is True:
        import step.train_cam
        timer = pyutils.Timer('step.train_cam:')
        step.train_cam.run(args)

    if args.make_cam_pass is True:
        import step.make_cam
        timer = pyutils.Timer('step.make_cam:')
        step.make_cam.run(args)

    if args.eval_cam_pass is True:
        import step.eval_cam
        timer = pyutils.Timer('step.eval_cam:')
        step.eval_cam.run(args)

    if args.cam_to_ir_label_pass is True:
        import step.cam_to_ir_label
        timer = pyutils.Timer('step.cam_to_ir_label:')
        step.cam_to_ir_label.run(args)

    if args.train_irn_pass is True:
        import step.train_irn
        timer = pyutils.Timer('step.train_irn:')
        step.train_irn.run(args)

    if args.make_ins_seg_pass is True:
        import step.make_ins_seg_labels
        timer = pyutils.Timer('step.make_ins_seg_labels:')
        step.make_ins_seg_labels.run(args)

    if args.eval_ins_seg_pass is True:
        import step.eval_ins_seg
        timer = pyutils.Timer('step.eval_ins_seg:')
        step.eval_ins_seg.run(args)

    if args.make_sem_seg_pass is True:
        import step.make_sem_seg_labels
        timer = pyutils.Timer('step.make_sem_seg_labels:')
        step.make_sem_seg_labels.run(args)

    if args.eval_sem_seg_pass is True:
        import step.eval_sem_seg
        timer = pyutils.Timer('step.eval_sem_seg:')
        step.eval_sem_seg.run(args)
I'd be grateful for any suggestions on how to deal with this issue. It seems like a simple thing ("run this and here you go, the program's ready to work"), but I've been struggling with it the whole day with no satisfactory result.
Thanks, Joanna
You pass the parameters when running the program, that is, when running python run_sample.py. The help text for that argument says you need to include the path to the VOC 2012 Devkit.
Your call should look something like this:
python run_sample.py --voc12_root 'path/to/your/VOC2012'
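In a Colab notebook the same call just needs a leading ! and a real path; the path below is only a placeholder for wherever you extracted the devkit (it must contain the JPEGImages subdirectory):
!python run_sample.py --voc12_root /content/VOCdevkit/VOC2012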
My command-line script, which uses argparse, raises an AssertionError. I have two scripts and a directory that stores three data files.
Script:
main.py which does a job of loading data
data.py which does pre-processing for language data
Data:
three token files are stored in the C:\Users\archive path
data.py script
# data.py
import os
from io import open
import torch

class Dictionary(object):
    def __init__(self):
        self.word2idx = {}
        self.idx2word = []

    def add_word(self, word):
        if word not in self.word2idx:
            self.idx2word.append(word)
            self.word2idx[word] = len(self.idx2word) - 1
        return self.word2idx[word]

    def __len__(self):
        return len(self.idx2word)

class Corpus(object):
    def __init__(self, path):
        self.dictionary = Dictionary()
        self.train = self.tokenize(os.path.join(path, 'train.txt'))
        self.valid = self.tokenize(os.path.join(path, 'valid.txt'))
        self.test = self.tokenize(os.path.join(path, 'test.txt'))

    def tokenize(self, path):
        """Tokenizes a text file."""
        assert os.path.exists(path)
        # Add words to the dictionary
        with open(path, 'r', encoding="utf8") as f:
            for line in f:
                words = line.split() + ['<eos>']
                for word in words:
                    self.dictionary.add_word(word)

        # Tokenize file content
        with open(path, 'r', encoding="utf8") as f:
            idss = []
            for line in f:
                words = line.split() + ['<eos>']
                ids = []
                for word in words:
                    ids.append(self.dictionary.word2idx[word])
                idss.append(torch.tensor(ids).type(torch.int64))
            ids = torch.cat(idss)

        return ids
main.py script
# main.py
import argparse
import time
import math
import os
import torch
import torch.nn as nn
import torch.onnx

import data

parser = argparse.ArgumentParser(description='PyTorch Wikitext-2 RNN/LSTM/GRU/Transformer Language Model')
parser.add_argument('--data', type=str, default='./data/wikitext-2',
                    help='location of the data corpus')
parser.add_argument('--model', type=str, default='LSTM',
                    help='type of recurrent net (RNN_TANH, RNN_RELU, LSTM, GRU, Transformer)')
parser.add_argument('--emsize', type=int, default=200,
                    help='size of word embeddings')
parser.add_argument('--nhid', type=int, default=200,
                    help='number of hidden units per layer')
parser.add_argument('--nlayers', type=int, default=2,
                    help='number of layers')
parser.add_argument('--lr', type=float, default=20,
                    help='initial learning rate')
parser.add_argument('--clip', type=float, default=0.25,
                    help='gradient clipping')
parser.add_argument('--epochs', type=int, default=40,
                    help='upper epoch limit')
parser.add_argument('--batch_size', type=int, default=20, metavar='N',
                    help='batch size')
parser.add_argument('--bptt', type=int, default=35,
                    help='sequence length')
parser.add_argument('--dropout', type=float, default=0.2,
                    help='dropout applied to layers (0 = no dropout)')
parser.add_argument('--tied', action='store_true',
                    help='tie the word embedding and softmax weights')
parser.add_argument('--seed', type=int, default=1111,
                    help='random seed')
parser.add_argument('--cuda', action='store_true',
                    help='use CUDA')
parser.add_argument('--log-interval', type=int, default=200, metavar='N',
                    help='report interval')
parser.add_argument('--save', type=str, default='model.pt',
                    help='path to save the final model')
parser.add_argument('--onnx-export', type=str, default='',
                    help='path to export the final model in onnx format')
parser.add_argument('--nhead', type=int, default=2,
                    help='the number of heads in the encoder/decoder of the transformer model')
parser.add_argument('--dry-run', action='store_true',
                    help='verify the code and the model')
args = parser.parse_args()

# Set the random seed manually for reproducibility.
torch.manual_seed(args.seed)
if torch.cuda.is_available():
    if not args.cuda:
        print("WARNING: You have a CUDA device, so you should probably run with --cuda")

device = torch.device("cuda" if args.cuda else "cpu")

###############################################################################
# Load data
###############################################################################

corpus = data.Corpus(args.data)
When I run main.py in terminal:
$ python main.py
The AssertionError appears as below:
File "main.py", line 67, in <module>
corpus = data.Corpus(args.data)
File "C:\Users\archive\data.py", line 22, in __init__
self.train = self.tokenize(os.path.join(path, 'train.txt'))
File "C:\Users\archive\data.py", line 28, in tokenize
assert os.path.exists(path)
AssertionError
I use ImageAI to crop my car dataset for CNN training. My code is quite simple: I have two GPUs, and according to the TensorFlow logs and the nvidia-smi output it uses both of them, but at the same time it doesn't use all the available memory.
import os
import sys, getopt
# os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from imageai.Detection import ObjectDetection
from time import perf_counter
import shutil
import numpy as np
from io import BytesIO
from PIL import Image
import argparse
import time
from shutil import copyfile

parser = argparse.ArgumentParser(description='Car Images Filter')
parser.add_argument("-i", "--input_folder", action="store", dest="input_folder", help="Folder for filtering", type=str, required=True)
parser.add_argument("-o", "--output_folder", action="store", dest="output_folder", help="Folder for the results of wrong images", type=str, required=False)
parser.add_argument("-start_from_model", "--start_from_model", action="store", dest="start_from_model", help="Start model for exception cases", type=str, default='')
parser.add_argument("-start_from_mark", "--start_from_mark", action="store", dest="start_from_mark", help="Start mark for exception cases", type=str, default='')
parser.add_argument("-start_from_file", "--start_from_file", action="store", dest="start_from_file", help="Start file for exception cases", type=str, default='')
parser.add_argument("-remove", "--remove", action="store_true", dest="remove", help="Folder for the results of wrong images", required=False)
args = parser.parse_args()

execution_path = os.getcwd()

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(execution_path, "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel()

directories = os.listdir(args.input_folder)
if args.start_from_mark:
    directories = directories[directories.index(args.start_from_mark):]
for mark in directories:
    models = os.listdir(os.path.join(args.input_folder, mark))
    for model in models:
        files = os.listdir(os.path.join(args.input_folder, mark, model))
        if model == args.start_from_model and args.start_from_file:
            files = files[files.index(args.start_from_file):]
        for file in files:
            if file.endswith('.json'):
                continue
            image = Image.open(os.path.join(args.input_folder, mark, model, file))
            detected_image_array, detections = detector.detectCustomObjectsFromImage(custom_objects=detector.CustomObjects(car=True, truck=True), input_image=os.path.join(args.input_folder, mark, model, file), output_type="array")
            detected_image = Image.fromarray(detected_image_array, 'RGB')
            if len(detections) == 0:
                copyfile(os.path.join(args.input_folder, mark, model, file), os.path.join(args.output_folder, mark, model, file))
                print(mark + " " + model + " " + file)
And here is my nvidia-smi output:
Image link
How can I increase GPU memory usage to make it faster?
New to Python here.
My professor has given me a piece of code to help process some imagery; however, it only works on one image at a time because an input and an output have to be stipulated each time. Usually I would use os or glob, but argparse is new to me and my usual methods do not work.
I need to edit this to create a list of '.hdf' files, with each output named the same as its input apart from a '_Processed.hdf' suffix.
Code below:
# Import the numpy library
import numpy
# Import the GDAL library
from osgeo import gdal
# Import the GDAL/OGR spatial reference library
from osgeo import osr
# Import the HDF4 reader.
import pyhdf.SD
# Import the system library
import sys
# Import the python Argument parser
import argparse
import pprint
import rsgislib

def creatGCPs(lat_arr, lon_arr):
    y_size = lat_arr.shape[0]
    x_size = lat_arr.shape[1]
    print(x_size)
    print(y_size)
    gcps = []
    for y in range(y_size):
        for x in range(x_size):
            gcps.append([x, y, lon_arr[y, x], lat_arr[y, x]])
    return gcps

def run(inputFile, outputFile):
    hdfImg = pyhdf.SD.SD(inputFile)
    #print("Available Datasets")
    pprint.pprint(hdfImg.datasets())

    #print("Get Header Attributes")
    #attr = hdfImg.attributes(full=1)
    #pprint.pprint(attr)

    rsgisUtils = rsgislib.RSGISPyUtils()
    wktStr = rsgisUtils.getWKTFromEPSGCode(4326)
    #print(wktStr)

    lat_arr = hdfImg.select('Latitude')[:]
    long_arr = hdfImg.select('Longitude')[:]
    sel_dataset_arr = hdfImg.select('Optical_Depth_Land_And_Ocean')[:]

    gcplst = creatGCPs(lat_arr, long_arr)

    y_size = lat_arr.shape[0]
    x_size = lat_arr.shape[1]

    min_lat = numpy.min(lat_arr)
    max_lat = numpy.max(lat_arr)
    min_lon = numpy.min(long_arr)
    max_lon = numpy.max(long_arr)

    lat_res = (max_lat - min_lat) / float(y_size)
    lon_res = (max_lon - min_lon) / float(x_size)

    driver = gdal.GetDriverByName("KEA")
    metadata = driver.GetMetadata()
    dst_ds = driver.Create(outputFile, x_size, y_size, 1, gdal.GDT_Float32)

    dst_ds.GetRasterBand(1).WriteArray(sel_dataset_arr)

    gcp_list = []
    for gcp_arr in gcplst:
        gcp = gdal.GCP(int(gcp_arr[2]), int(gcp_arr[3]), int(0), gcp_arr[0], gcp_arr[1])
        gcp_list.append(gcp)

    dst_ds.SetGCPs(gcp_list, wktStr)

    dst_ds = None

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Define the argument for specifying the input file.
    parser.add_argument("-i", "--input", type=str, required=True, help="Specify the input image file.")
    # Define the argument for specifying the output file.
    parser.add_argument("-o", "--output", type=str, required=True, help="Specify the output image file.")

    args = parser.parse_args()

    run(args.input, args.output)
From the argparse docs here, you can simply add a nargs='*' to the argument definitions. However, be sure to give the input and output files in the same order...
Also, you can use the pathlib.Path object, which is now standard in Python >=3.4, to play with file names.
So with an added from pathlib import Path at the top, the last part of your code becomes:
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Define the argument for specifying the input file.
    parser.add_argument("-i", "--input", nargs='*', type=str, required=True, help="Specify the input image file.")

    args = parser.parse_args()

    for input in args.input:
        output = Path(input).stem + '_Processed.hdf'
        run(input, output)
Here, args.input is now a list of strings, so we iterate over it. The .stem attribute returns the file name without the final extension; I find it cleaner than something like input[:-4], which only works for specific extension lengths...
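Purely as an illustration of that renaming step (the filename below is made up):
from pathlib import Path

p = Path('some_image.hdf')
print(p.stem)                       # some_image
print(p.stem + '_Processed.hdf')    # some_image_Processed.hdf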
This works well with glob patterns in a standard linux shell (I don't know for other cases).
For example, calling python this_script.py -i Image_* processes every file whose name begins with "Image_".
You can use the nargs='+' option, and since you're only going to have one required argument, I'd recommend that you don't use --input as an option, but simply run the script as script_name.py input_file1 input_file2 input_file3 ...:
import os.path

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('input', nargs='+', help="Specify the input image file.")
    args = parser.parse_args()

    for filename in args.input:
        root, ext = os.path.splitext(filename)
        run(filename, ''.join((root, '_Processed', ext)))
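For example (file names are hypothetical), python script_name.py a.hdf b.hdf would write a_Processed.hdf and b_Processed.hdf next to the inputs, and a shell glob such as python script_name.py *.hdf picks up every HDF file in the current directory.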