I've written a bunch of Python code for tagging our vendor files in Solidworks PDM and I'm trying to use the Solidworks PDM API to actually apply that information. Officially the API only supports C# and VB, but I'd like to keep everything in Python if possible, because everything else is already in Python (and it's the language I'm most comfortable programming with). Here's a high level list of what I'm trying to accomplish:
Check out a bunch of files
Update a data card variable
Check those files back in
The API defines two main ways to check out, update variables in, and check in files: one for individual files and one for groups of files. You can use methods accessible through the IEdmVault5 interface to perform all 3 operations on individual files; to perform them on groups of files you have to use 3 separate interfaces--IEdmBatchGet (checkout), IEdmBatchUpdate2 (update variables), and IEdmBatchUnlock (check in).
I was able to write functional code that does all 3 things for each individual file, but it was slow when operating on many files--my goal is to update a couple thousand files at once. Getting the batch interfaces to work proved much trickier, but I was eventually able to get batch checkout and checkin working (and it was definitely worth it--each operation was about 10X faster using the batch interface). However, I've gotten pretty stuck trying to make variable updating work. Here's my code for updating variables:
import win32com.client
import os
import comtypes.client as cc

cc.GetModule(r'C:\Program Files (x86)\SOLIDWORKS PDM\EdmInterface.dll')
import comtypes.gen._5FA2C692_8393_4F31_9BDB_05E6F807D0D3_0_5_22 as pdm_lib2
vault_name = 'vault_name'
folder_path = 'some_folder_path'
def connect_to_vault(vault_name, lib='comtypes'):
    if lib == 'comtypes':
        vault = cc.CreateObject('ConisioLib.EdmVault.1')
        vault.LoginAuto(vault_name, 0)
    else:
        vault = win32com.client.dynamic.Dispatch('ConisioLib.EdmVault.1')
        vault.LoginAuto(vault_name, 0)
    return vault
def getrefs(vault, filenames, folder_path):
    DocIDs = []
    ProjIDs = []
    for filename in filenames:
        temp_ProjID = vault.GetFolderFromPath(folder_path)
        temp_DocID = vault.GetFileFromPath(filename, temp_ProjID)[0]  # this fails when I use a comtypes-generated vault
        DocIDs.append(temp_DocID.ID)
        ProjIDs.append(temp_ProjID.ID)
    print('Document and Project IDs pulled')
    return DocIDs, ProjIDs
vault = connect_to_vault(vault_name)
ref_vault = connect_to_vault(vault_name, lib='win32com')
filenames = [folder_path + s for s in os.listdir(folder_path)]
DocIDs, ProjIDs = getrefs(ref_vault, filenames, folder_path)
# Using comtypes to update files
VarIDs = [54] * len(DocIDs)  # updating description only
var_values = [['foo' + str(s)] for s in range(len(DocIDs))]  # dummy values for now

update_vars = vault.CreateUtility(2)  # create instance of BatchUpdate
for i, file in enumerate(DocIDs):
    update_vars.SetVar(file, VarIDs[i], var_values[i], '', 1)
pdm_error = [pdm_lib2.EdmBatchError2()] * len(DocIDs)
update_vars.CommitUpdate([pdm_error])
When I call update_vars.CommitUpdate([pdm_error]), I get the following error:
ArgumentError: argument 2: <class 'AttributeError'>: 'list' object has no attribute 'QueryInterface'
I'm not sure why this method is expecting an object with a 'QueryInterface' attribute--I'm only passing it a list of structs, not a full COM object like my file vault. I also tried using win32com to execute the method:
update_vars = ref_vault.CreateUtility(2)  # create instance of BatchUpdate, use win32com instead
for i, file in enumerate(DocIDs):
    update_vars.SetVar(file, VarIDs[i], var_values[i], '', 1)
pdm_error = [pdm_lib2.EdmBatchError2()] * len(DocIDs)
update_vars.CommitUpdate([pdm_error])
And now I get this error:
Traceback (most recent call last):
File "<ipython-input-222-0c49fb0861b9>", line 7, in <module>
update_vars.CommitUpdate([pdm_error])
File "D:\Users\apreacher\Documents\Shared Files\Python\Webscraping_projects\Helper Modules\pdm_lib.py", line 1500, in CommitUpdate
, poCallback)
File "C:\Users\apreacher\AppData\Local\Continuum\anaconda3\lib\site-packages\win32com\client\__init__.py", line 467, in _ApplyTypes_
self._oleobj_.InvokeTypes(dispid, 0, wFlags, retType, argTypes, *args),
MemoryError: CreatingSafeArray
And this is where I'm stuck. I haven't been able to make any headway on getting the CommitUpdate method to work properly. I also have the method definitions from the files generated by makepy.py and comtypes, but I don't really know how to interpret them:
makepy.py method definition:
def CommitUpdate(self, ppoRetErrors=pythoncom.Missing, poCallback=0):
    'method Commit'
    return self._ApplyTypes_(3, 1, (3, 0), ((24612, 2), (9, 49)), 'CommitUpdate', None,
        ppoRetErrors, poCallback)
comtypes generated file:
COMMETHOD([dispid(3), helpstring('method Commit')], HRESULT, 'CommitUpdate',
    ( ['out'], POINTER(_midlSAFEARRAY(EdmBatchError2)), 'ppoRetErrors' ),
    ( ['in', 'optional'], POINTER(IEdmCallback), 'poCallback', 0 ),
    ( ['out', 'retval'], POINTER(c_int), 'plErrorCount' )),
Any ideas?
What version of PDM are you using? (I am on Pro 2019 sp4.)
I just noticed an inconsistency in your code. vault.CreateUtility(2) would (in .NET) return an object of type IEdmBatchUpdate. 4 lines later, you are calling the method CommitUpdate which only exists for the newer API IEdmBatchUpdate2 (see https://help.solidworks.com/2020/english/api/epdmapi/EPDM.Interop.epdm~EPDM.Interop.epdm.IEdmBatchUpdate2.html ).
I can tell you for certain that under C#, this would require casting the result of CreateUtility to the proper object type.
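For what it's worth, the Python/comtypes equivalent of that cast would be an explicit QueryInterface call on the utility object. A minimal sketch, assuming the generated wrapper module exposes IEdmBatchUpdate2 (check the interface names in your own generated file):
# cast the utility to the newer interface, as the C# code would
update_vars = vault.CreateUtility(2).QueryInterface(pdm_lib2.IEdmBatchUpdate2)

# In the comtypes-generated signature, ppoRetErrors and plErrorCount are
# both ['out'] parameters, so comtypes should hand them back as return
# values rather than accept them as arguments:
errors, error_count = update_vars.CommitUpdate()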
It seems you are looking for a way to automate entry of data into file datacards.
Have you tried using data import rules? https://help.solidworks.com/2020/English/EnterprisePDM/Admin/c_Working_With_Variable_Values_overview.htm
I can't seem to find a simple answer to this question. I have this working successfully in LibreOffice Basic:
NamedRange = ThisComponent.NamedRanges.getByName("transactions_detail")
RefCells = NamedRange.getReferredCells()
Set MainRange = RefCells.getDataArray()
Then I iterate over MainRange and pull out the rows I am interested in.
Can I do something similar in a Python macro? Can I assign a 2D named range to a Python variable, or do I have to iterate over the range to assign the individual cells?
I am new to Python but hope to convert my iteration-intensive macro function to Python in hopes of making it faster.
Any help would be much appreciated.
Thanks.
LibreOffice can be manipulated from Python with the pyuno library. The documentation of pyuno is unfortunately incomplete, but going through this tutorial may help.
To get started:
Python-Uno, the library to communicate via Uno, is already in the LibreOffice Python's path. To initialize your context, type the following lines in your Python shell:
import socket # only needed on win32-OOo3.0.0
import uno
# get the uno component context from the PyUNO runtime
localContext = uno.getComponentContext()
# create the UnoUrlResolver
resolver = localContext.ServiceManager.createInstanceWithContext(
"com.sun.star.bridge.UnoUrlResolver", localContext )
# connect to the running office
ctx = resolver.resolve( "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext" )
smgr = ctx.ServiceManager
# get the central desktop object
desktop = smgr.createInstanceWithContext( "com.sun.star.frame.Desktop",ctx)
# access the current writer document
model = desktop.getCurrentComponent()
Then to get a named range and access the data as an array, you can use the following methods:
NamedRange = model.NamedRanges.getByName("Test Name")
MainRange = NamedRange.getDataArray()
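Since getDataArray() returns the whole range as a tuple of row tuples, the row-filtering loop from the Basic macro can become a plain list comprehension. A minimal sketch (the range name and column index are placeholders from the question):
named_range = model.NamedRanges.getByName("transactions_detail")
main_range = named_range.getReferredCells().getDataArray()
# keep only the rows whose first column matches a value of interest
rows = [row for row in main_range if row[0] == "some_value"]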
However, I am unsure that this will result in a noticeable performance gain.
I am trying to complete some homework in a DeepLearning.ai course assignment.
When I try the assignment on the Coursera platform everything works fine; however, when I try to do the same imports on my local machine it gives me an error:
ModuleNotFoundError: No module named 'lr_utils'
I have tried resolving the issue by installing lr_utils but to no avail.
There is no mention of this module online, and now I have started to wonder if it's proprietary to deeplearning.ai?
Or can we resolve this issue in some other way?
You will be able to find the lr_utils.py and all the other .py files (and thus the code inside them) required by the assignments:
Go to the first assignment (i.e. Python Basics with numpy), which you can always access whether you are a paid user or not.
Then click the 'Open' button in the menu bar above.
Then you can include the code of the modules directly in your code.
As per the answer above, lr_utils is a part of the deep learning course and is a utility for downloading the datasets. It should readily work with the paid version of the course, but in case you've 'lost' access to it, I noticed this GitHub project has lr_utils.py as well as some datasets:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments
Note:
The Chinese website links did not work when I looked at them; maybe the server storing the files expired. I did see that this GitHub project had some datasets, though, as well as the lr_utils file.
EDIT: The link no longer seems to work. Maybe this one will do?
https://github.com/knazeri/coursera/blob/master/deep-learning/1-neural-networks-and-deep-learning/2-logistic-regression-as-a-neural-network/lr_utils.py
Download the datasets from the answer above.
And use this code (it's better than the above since it closes the files after usage):
import numpy as np
import h5py

def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])

    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
"lr_utils" is not official library or something like that.
Purpose of "lr_utils" is to fetch the dataset that is required for course.
option (didn't work for me): go to this page and there is a python code for downloading dataset and creating "lr_utils"
I had a problem with fetching data from provided url (but at least you can try to run it, maybe it will work)
option (worked for me): in the comments (at the same page 1) there are links for manually downloading dataset and "lr_utils.py", so here they are:
link for dataset download
link for lr_utils.py script download
Remember to extract dataset when you download it and you have to put dataset folder and "lr_utils.py" in the same folder as your python script that is using it (script with this line "import lr_utils").
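For clarity, the layout should look roughly like this (assuming your script is named, say, assignment.py; the names are placeholders):
your_folder/
    assignment.py        # your script containing "import lr_utils"
    lr_utils.py
    datasets/
        train_catvnoncat.h5
        test_catvnoncat.h5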
The way I fixed this problem was by:
clicking File -> Open -> you will see the lr_utils.py file (it does not matter whether you have the paid or free version of the course).
opening the lr_utils.py file in Jupyter Notebooks and clicking File -> Download (store it in your own folder), then rerunning the module imports. It will work like magic.
I did the same process for the datasets folder.
You can download the train and test datasets directly here: https://github.com/berkayalan/Deep-Learning/tree/master/datasets
And you need to add this code to the beginning:
import numpy as np
import h5py
import os
def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
I faced a similar problem and followed these steps:
1. Import the following libraries:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
2. Download train_catvnoncat.h5 and test_catvnoncat.h5 from either of the links below:
https://github.com/berkayalan/Neural-Networks-and-Deep-Learning/tree/master/datasets
or
https://github.com/JudasDie/deeplearning.ai/tree/master/Improving%20Deep%20Neural%20Networks/Week1/Regularization/datasets
3. Create a folder named datasets and paste these two files into it.
[Note: the datasets folder and your source code file should be in the same directory.]
4. Run the following code:
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        train_set_x_orig = np.array(train_dataset["train_set_x"][:])
        train_set_y_orig = np.array(train_dataset["train_set_y"][:])

    with h5py.File('datasets/test_catvnoncat.h5', "r") as test_dataset:
        test_set_x_orig = np.array(test_dataset["test_set_x"][:])
        test_set_y_orig = np.array(test_dataset["test_set_y"][:])
        classes = np.array(test_dataset["list_classes"][:])

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
5. Load the data:
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
# check the datasets
print(len(train_set_x_orig))
print(len(test_set_x_orig))
Your dataset is now ready; you can check the lengths of train_set_x_orig and test_set_x_orig as above. For mine, they were 209 and 50.
I was able to download the dataset directly from the Coursera page.
Once you open the Coursera notebook, go to File -> Open and a file browser will be displayed.
There the notebooks and datasets are listed; you can go to the datasets folder and download the required data for the assignment. The lr_utils.py module is also available for download.
Below is the code; just save it to a file named "lr_utils.py" and then you can use it.
import numpy as np
import h5py
def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
If your code still cannot find your newly created lr_utils.py file, just add this:
import sys
sys.path.append("full path of the directory where you saved the lr_utils.py file")
Here is the way to get the dataset, as #ThinkBonobo suggested:
https://github.com/andersy005/deep-learning-specialization-coursera/tree/master/01-Neural-Networks-and-Deep-Learning/week2/Programming-Assignments/datasets
Write a lr_utils.py file as in #StationaryTraveller's answer above, and put it in any directory in sys.path:
def load_dataset():
    with h5py.File('datasets/train_catvnoncat.h5', "r") as train_dataset:
        ....
BUT make sure that you delete the 'datasets/' prefix, because now the name of your data file is just train_catvnoncat.h5.
Restart the kernel and good luck.
I may add to the answers that you can save the lr_utils script on disk and import it as a module using the importlib.util functions, in the following way.
The code below came from the general thread about importing functions from external files into the current user session:
How to import a module given the full path?
### Source load_dataset() function from a file
import importlib.util

# Specify a name (I think it can be whatever) and the path to the lr_utils.py script locally on your PC:
util_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_utils.py")
# Make a module
load_utils = importlib.util.module_from_spec(util_script)
# Execute it on the fly
util_script.loader.exec_module(load_utils)
# Load your function
load_utils.load_dataset()
# Then you can use your load_dataset() coming from above specified 'module' called load_utils
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_utils.load_dataset()
# This could be a general way of calling different user specified modules so I did the same for the rest of the neural network function and put them into separate file to keep my script clean.
# Just remember that Python treat it like a module so you need to prefix the function name with a 'module' name eg.:
# d = nnet_utils.model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1000, learning_rate = 0.005, print_cost = True)
nnet_script = importlib.util.spec_from_file_location("utils function", "D:/analytics/Deep_Learning_AI/functions/lr_nnet.py")
nnet_utils = importlib.util.module_from_spec(nnet_script)
nnet_script.loader.exec_module(nnet_utils)
That has been the most convenient way for me to source functions/methods from different files in Python so far.
I am coming from an R background, where you can call the single function source() to bring the contents of external scripts into your current session.
The above answers didn't help me, and some links had expired.
So, lr_utils is not a pip library but a file in the same notebook workspace on the Coursera website.
You can click on "Open", and it'll open the explorer where you can download everything that you would want to run in another environment.
(I used this on a browser.)
This is how I solved mine: I copied the lr_utils file and pasted it into my notebook. Thereafter I downloaded the dataset by zipping the files and extracting the archive, with the following code. Note: run the code in the Coursera notebook and select only the zipped file in the directory to download.
!pip install zipfile36
import zipfile

zf = zipfile.ZipFile('datasets/train_catvnoncat_h5.zip', mode='w')
try:
    zf.write('datasets/train_catvnoncat.h5')
    zf.write('datasets/test_catvnoncat.h5')
finally:
    zf.close()
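On your local machine you can then unpack the downloaded archive, for example (a minimal sketch, assuming the zip was downloaded into your working directory):
import zipfile

# the archive stores the files under datasets/, so extracting into the
# current directory recreates the datasets/ folder next to your script
with zipfile.ZipFile('train_catvnoncat_h5.zip') as zf:
    zf.extractall('.')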
I'm attempting to generate Python bindings for a Thrift service definition using Bazel. As far as I've been able to tell, there is no existing .bzl for doing this so I'm somewhat on my own here. I've written .bzl rules in the past but the situation I'm running into in this case is different.
The general issue is that I don't know the names of the output files from the thrift command before the build starts which means that I can't generate a py_library rule with a srcs attribute set correctly since I don't have the names of the files. I've tried to follow examples whereby the output files are known ahead of time by way of generating a .zip file, but the py_library rule only allows .py files as srcs so this doesn't work.
The only thing I can think of would be to use a repository_rule to generate the code and BUILD files but what I'm trying to accomplish doesn't seem like much of a stretch and should be supported.
Someone has attempted this before.
Discussion here:
https://groups.google.com/forum/#!topic/bazel-dev/g3DVmhVhNZs
Code Here:
https://github.com/wt/bazel_thrift
I would start there.
Edit:
I started there. I did not get as far as I hoped. Bazel is being extended to support having multiple outputs generated by one input, but it does not allow that very easily just yet, per:
groups.google.com/forum/#!topic/bazel-discuss/3WQhHm194yU
Regardless, I did attempt something for C++ thrift bindings, which have the same issue. The Java example got around this by using the source jar as a build source, which won't work for us. To make it work, I passed in the list of source files I cared about that would be created by the thrift generator. I then reported these files as the output that would be generated in the impl. That seems to work. It is a bit nasty in that you have to know what files you are looking for before you build, but it does work. It would also be possible to have a small program read the thrift file and determine the output files it would make. That would be nicer, but I don't have the time. Plus, the current approach is nice in that it explicitly defines which files you expect thrift to generate, which makes the BUILD file a little easier to understand for a newbie like me.
First pass at some code, maybe I will clean it up and submit it as a patch (maybe not):
###########
# CPP gen
###########
# Create Generated cpp source files from thrift idl files.
#
def _gen_thrift_cc_src_impl(ctx):
    out = ctx.outputs.outs
    if not out:
        # empty set
        # nothing to do, no inputs to build
        return DefaultInfo(files=depset(out))

    # Used dir(out[0]) to see what
    # we had available in the object.
    # The dirname attribute tells us the directory
    # we should be putting stuff in, which works nicely.
    # ctx.genfile_dir is not the output directory
    # when called as an external repository.
    target_genfiles_root = out[0].dirname
    thrift_includes_root = "/".join(
        [target_genfiles_root, "thrift_includes"])
    gen_cpp_dir = "/".join([target_genfiles_root, "."])

    commands = []
    commands.append(_mkdir_command_string(target_genfiles_root))
    commands.append(_mkdir_command_string(thrift_includes_root))

    thrift_lib_archive_files = ctx.attr.thrift_library._transitive_archive_files
    for f in thrift_lib_archive_files:
        commands.append(
            _tar_extract_command_string(f.path, thrift_includes_root))

    commands.append(_mkdir_command_string(gen_cpp_dir))

    thrift_lib_srcs = ctx.attr.thrift_library.srcs
    for src in thrift_lib_srcs:
        commands.append(_thrift_cc_compile_command_string(
            thrift_includes_root, gen_cpp_dir, src))

    inputs = (
        list(thrift_lib_archive_files) + thrift_lib_srcs)

    ctx.action(
        inputs = inputs,
        outputs = out,
        progress_message = "Generating CPP sources from thrift archive %s" % target_genfiles_root,
        command = " && ".join(commands),
    )
    return DefaultInfo(files=depset(out))

thrift_cc_gen_src = rule(
    _gen_thrift_cc_src_impl,
    attrs = {
        "thrift_library": attr.label(
            mandatory=True, providers=['srcs', '_transitive_archive_files']),
        "outs": attr.output_list(mandatory=True, non_empty=True),
    },
    output_to_genfiles = True,
)
# wraps cc_library to generate a library from one or more .thrift files
# provided as a thrift_library bundle.
#
# Generates all src and hdr files needed, but you must specify the expected
# files. This is a bug in bazel: https://groups.google.com/forum/#!topic/bazel-discuss/3WQhHm194yU
#
# Instead of src and hdrs, requires: cpp_srcs and cpp_hdrs. These are required.
#
# Takes:
# name: The library name, like cc_library
#
# thrift_library: The library of source .thrift files from which our
# code will be built from.
#
# cpp_srcs: The expected source that will be generated and built. Passed to
# cc_library as src.
#
# cpp_hdrs: The expected header files that will be generated. Passed to
# cc_library as hdrs.
#
# Rest of options are documented in native.cc_library
#
def thrift_cc_library(name, thrift_library,
                      cpp_srcs=[], cpp_hdrs=[],
                      build_skeletons=False,
                      deps=[], alwayslink=0, copts=[],
                      defines=[], include_prefix=None,
                      includes=[], linkopts=[],
                      linkstatic=0, nocopts=None,
                      strip_include_prefix=None,
                      textual_hdrs=[],
                      visibility=None):
    # from our thrift_library tarball source bundle,
    # create a generated cpp source directory.
    outs = []
    for src in cpp_srcs:
        outs.append("//:" + src)
    for hdr in cpp_hdrs:
        outs.append("//:" + hdr)
    thrift_cc_gen_src(
        name = name + 'cc_gen_src',
        thrift_library = thrift_library,
        outs = outs,
    )
    # Then make the library for the given name.
    native.cc_library(
        name = name,
        deps = deps,
        srcs = cpp_srcs,
        hdrs = cpp_hdrs,
        alwayslink = alwayslink,
        copts = copts,
        defines = defines,
        include_prefix = include_prefix,
        includes = includes,
        linkopts = linkopts,
        linkstatic = linkstatic,
        nocopts = nocopts,
        strip_include_prefix = strip_include_prefix,
        textual_hdrs = textual_hdrs,
        visibility = visibility,
    )
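For reference, using the macro from a BUILD file might look something like this (a hypothetical sketch; the load path, target names, and generated file names are placeholders that depend on your .thrift IDL):
load("//tools:thrift.bzl", "thrift_cc_library")

thrift_cc_library(
    name = "my_service",
    thrift_library = ":my_service_thrift",  # a thrift_library target providing srcs and the archive
    cpp_srcs = [
        "MyService.cpp",
        "my_service_types.cpp",
    ],
    cpp_hdrs = [
        "MyService.h",
        "my_service_types.h",
    ],
)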
I want to try to add all the step details - Expected, Actual, Status, etc. - to a QC Run for a testcase of a TestSet, from a Python script living outside Quality Center.
I have come this far (code given below), but I don't know how to add the Step Expected and Step Actual results. If anyone knows how to do it, please help me out! Please, I don't want any QTP solutions.
Thanks,
Code:
# Script name - add_tsrun.py
# C:\Python27\python.exe
# This script lives locally on a Windows machine.
# Dependencies - Python 2.7, PyWin32 installed, IE8, a QC account, connectivity to QCSERVER
import win32com.client, os
tdc = win32com.client.Dispatch("TDApiOle80.TDConnection")
tdc.InitConnection('http://QCSERVER:8080/qcbin')
tdc.Login('USERNAME', 'PASSWORD')
tdc.Connect('DOMAIN_NAME', 'PROJECT')
tsFolder = tdc.TestSetTreeManager.NodeByPath('Root\\test_me\\sub_folder')
tsList = tsFolder.FindTestSets('testset1')
ts_object = tsList.Item(1)
ts_dir = os.path.dirname('testset1')
ts_name = os.path.basename('testset1')
tsFolder = tdc.TestSetTreeManager.NodeByPath(ts_dir)
tsList = tsFolder.FindTestSets(ts_name)
ts_object = tsList.Item(1)
TSTestFact = ts_object.TSTestFactory
TestSetTestsList = TSTestFact.NewList("")
ts_instance = TestSetTestsList.Item(1)
newItem = ts_instance.RunFactory.AddItem(None) # newItem == Run Object
newItem.Status = 'No Run'
newItem.Name = 'Run 03'
newItem.Post()
newItem.CopyDesignSteps() # Copy Design Steps
newItem.Post()
steps = newItem.StepFactory.NewList("")
step1 = steps[0]
step1.Status = "Not Completed"
step1.post()
## How do I change the Actual Result??
## I can access the Actual, Expected Result by doing this, but not change it
step1.Field('ST_ACTUAL') = 'My actual result' # This works in VB, but not in Python, as it's a syntax error!!
Traceback:
  File "<interactive input>", line 1
SyntaxError: can't assign to function call
Hope this helps you guys out there. If you know how to set the Actual Result, please help me out and let me know. Thanks,
Amit
As Ethan Furman answered in your previous question:
In Python () represent calls to functions, while [] represent indexing and mapping.
So in other words, you probably want to do step1.Field['ST_ACTUAL'] = 'My actual result'
Found the answer after a lot of Google searching :)
Simple -> Just do this:
step1.SetField("ST_ACTUAL", "my actual result") # Wohhooooo!!!!
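To set the expected result as well, the same pattern should work. A hedged sketch (ST_EXPECTED is my assumption for the expected-result field name, by analogy with ST_ACTUAL; verify it against your project's field definitions):
step1.SetField("ST_ACTUAL", "my actual result")
step1.SetField("ST_EXPECTED", "my expected result")  # assumed field name, check your QC project
step1.Status = "Passed"  # Status is a plain property, as in the question's code
step1.Post()  # persist the step changes back to Quality Center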
If the above code fails to work, try the following:
(OPTIONAL) Rebuild your win32com wrappers with makepy, so that the OTA objects use early binding:
# http://oreilly.com/catalog/pythonwin32/chapter/ch12.html
a. Start PythonWin, and from the Tools menu, select the item COM Makepy utility, then pick the OTA COM Type Library.
b. Or, using Windows Explorer, locate the client subdirectory under the main win32com directory and double-click the file makepy.py.
Thank you all...