I've been trying to create content in my Plone site using the Dexterity tool createContentInContainer.
I wrote a script that runs under my zopepy instance, and it accomplishes the following:
Selects data from a SQL table.
Creates a list of tuples that mirrors the custom content type defined in my product.
I know I'm extremely naive in my approach, but I've created a connection to the application's database by:
storage = FileStorage.FileStorage('.../var/filestorage/Data.fs')
db = DB(storage)
conn = db.open()
dbroot = conn.root()
I'm trying to create content by:
createContentInContainer(dbroot['Application']['myapp']['existingfolder'], portal_type, checkConstraints=False, content=item)
portal_type is previously set to my custom content type. item has been, variously, the list of tuples passed to the content's interface (which throws a "Could not adapt" TypeError) and an unregistered adapter that inherits from the interface.
The interface for the type is registered in mysite.Widget.xml in profiles/default/types, but the script keeps throwing:
Traceback (most recent call last):
  File "./bin/zopepy", line 345, in <module>
    execfile(__file__)
  File "importdex.py", line 105, in <module>
    createContentInContainer(dbroot['Application']['myapp']['existingfolder'], portal_type, checkConstraints=False, content=item)
  File "env/mysite/eggs/plone.dexterity-1.0-py2.7.egg/plone/dexterity/utils.py", line 149, in createContentInContainer
    content = createContent(portal_type, **kw)
  File "env/mysite/eggs/plone.dexterity-1.0-py2.7.egg/plone/dexterity/utils.py", line 105, in createContent
    fti = getUtility(IDexterityFTI, name=portal_type)
  File "env/mysite/eggs/zope.component-3.9.5-py2.7.egg/zope/component/_api.py", line 169, in getUtility
    raise ComponentLookupError(interface, name)
zope.component.interfaces.ComponentLookupError: (<InterfaceClass plone.dexterity.interfaces.IDexterityFTI>, 'mysite.Widget')
As I've mentioned, I know I'm extremely naive in my approach, and I probably deserve a slap on the hand. I apologize if I've presented my question in a confusing manner.
My questions are these:
Can I call createContentInContainer from zopepy? Is my hand-rolled connection enough, or does the script need to run within the application to inherit the context that Dexterity/FTI needs to accomplish what I'm asking?
Do I need an adapter? The one I have inherits from grok.Adapter and passes the interface to grok.provides and grok.context, but should it declare properties based on the entirety of the content schema?
The list of tuples is arbitrary. It just seemed like the thing to do given the structure of the ZODB. If I declare the schema of the content type as properties in a registered adapter, the data should be crafted to conform to attributes of an object (the adapter), right?
You need to set up a little more context for your code to work. The Plone site acts as a local component registry, for example.
You are also better off using the bin/instance run [scriptname] command; it'll set up the database connection for you and pass the root object as app to your script. In that script, use the following boilerplate to get the rest of the scaffolding up:
import transaction
from zope.app.component.hooks import setSite
from Testing.makerequest import makerequest
from AccessControl.SecurityManagement import newSecurityManager

plone_site_id = 'Plone'  # Adjust as needed.

app = makerequest(app)     # wrap the root object in a fake request
site = app[plone_site_id]
setSite(site)              # make the Plone site the active local component registry
# Impersonate a user with rights to add content.
user = app.acl_users.getUser('admin').__of__(site.acl_users)
newSecurityManager(None, user)
With these in place you'll have everything you need to run your code. Don't forget to call transaction.commit() at the end. Your Plone site is reachable in the local variable site.
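With that, run the script as bin/instance run importdex.py. A minimal sketch of what the import loop itself might then look like; the folder id and the 'mysite.Widget' type name come from the question, while rows and the title field are placeholders standing in for the real SQL data and schema fields:

from plone.dexterity.utils import createContentInContainer

folder = site['existingfolder']   # target folder inside the Plone site
for item in rows:                 # rows: the tuples selected from SQL
    # Keyword arguments are matched to fields in the type's schema;
    # 'title' is a placeholder -- map each tuple element to a real field.
    createContentInContainer(folder, 'mysite.Widget',
                             checkConstraints=False, title=item[0])
transaction.commit()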
I read the following in Working Effectively with Legacy Code (the book):
Unit tests run fast. If they don't run fast, they aren't unit tests. A test is not a unit test if:
1. It talks to a database.
2. It communicates across a network.
3. It touches the file system.
4. You have to do special things to your environment (such as editing configuration files) to run it.
I have a function that downloads a zip from the internet and then converts it into Python objects of a particular class.
import typing as t

import requests

def get_book_objects(date: str) -> t.List[Book]:
    # download the zip with the date from the endpoint
    res = requests.get(f"HTTP-URL-{date}")
    # code to read the response content in BytesIO and then use the ZipFile module
    # to extract data.
    # parse the data and return a list of Book objects
    return books
Let's say I want to write a unit test for the function get_book_objects. How am I supposed to write a unit test without making a network request? I would prefer a file-system read over a network request, since it is much faster; even though a good unit test supposedly doesn't touch the file system either, I'd be fine with that.
But even to write a unit test that provides a local zip file, I'd have to modify the existing function to read from the local file system, or add an extra parameter so the test can pass in a zip file path.
What will you do to write a good unit test in this kind of situation?
In the TDD world, the usual answer would be to delegate the work to a more easily tested component.
Consider:
def get_book_objects(date: str) -> t.List[Book]:
    # This is the piece that makes get_book_objects hard
    # to isolate
    http_get = requests.get

    # download the zip with the date from the endpoint
    res = http_get(f"HTTP-URL-{date}")
    # code to read the response content in BytesIO and then use the ZipFile module
    # to extract data.
    # parse the data and return a list of Book objects
    return books
which might then become something like
def get_book_objects(date: str) -> t.List[Book]:
    # This is the piece that makes get_book_objects hard
    # to isolate
    http_get = requests.get
    return get_book_objects_v2(http_get, date)

def get_book_objects_v2(http_get, date: str) -> t.List[Book]:
    # download the zip with the date from the endpoint
    res = http_get(f"HTTP-URL-{date}")
    # code to read the response content in BytesIO and then use the ZipFile module
    # to extract data.
    # parse the data and return a list of Book objects
    return books
get_book_objects is still hard to test, but it is also "so simple that there are obviously no deficiencies". On the other hand, get_book_objects_v2 is easy to test, because your test can control what callable is passed to the subject, and can use any reasonable substitute you like.
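For example, a test might look like the sketch below; it assumes get_book_objects_v2 from above is in scope, that the function reads res.content as a zip, and that Book exposes the parsed fields -- all illustrative assumptions:

import io
import zipfile

class FakeResponse:
    """Stands in for requests.Response; only .content is needed here."""
    def __init__(self, content: bytes):
        self.content = content

def make_zip_bytes(name: str, payload: bytes) -> bytes:
    # Build an in-memory zip so the test touches neither network nor disk.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(name, payload)
    return buf.getvalue()

def test_get_book_objects_v2_parses_books():
    canned = make_zip_bytes("books.csv", b"title\nDune\n")

    def fake_http_get(url: str) -> FakeResponse:
        # Any URL gets the canned zip back; no request is made.
        return FakeResponse(canned)

    books = get_book_objects_v2(fake_http_get, "2023-01-01")
    assert [b.title for b in books] == ["Dune"]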
What we've done is shift most of the complexity/risk into a "unit" that is easier to test. For the function that is still hard to test, we'll use other techniques.
When authors talk about tests "driving" the design, this is one example - we're treating "complicated code needs to be easy to test" as a constraint on our design.
You've already identified the correct reference (Working Effectively with Legacy Code). The material you want is the discussion of seams.
A seam is a place where you can alter behavior in your program without editing in that place.
(In my edition of the book, the discussion begins in Chapter 4.)
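Applied to the question's code, the "additional parameter" idea is exactly such a seam; a default argument keeps production call sites unchanged (a sketch, not the book's wording):

def get_book_objects(date: str, http_get=requests.get) -> t.List[Book]:
    # Production code calls get_book_objects(date) and hits the network;
    # a test calls get_book_objects(date, http_get=fake) and never does.
    res = http_get(f"HTTP-URL-{date}")
    # ... read, extract, and parse as before ...
    return books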
I've written a bunch of Python code for tagging our vendor files in Solidworks PDM, and I'm trying to use the Solidworks PDM API to actually apply that information. Officially the API only supports C# and VB, but I'd like to keep everything in Python if possible, because everything else is already in Python (and it's the language I'm most comfortable programming with). Here's a high-level list of what I'm trying to accomplish:
Check out a bunch of files
Update a data card variable
Check those files back in
The API defines two main ways to check in, check out, and update variables--one for individual files and one for groups of files. You can use methods accessible through the IEdmVault5 interface to perform all 3 operations on individual files; to perform them on groups of files you have to use 3 separate interfaces--IEdmBatchGet (checkout), IEdmBatchUpdate2 (update variables), and IEdmBatchUnlock (check in).
I was able to write functional code that does all 3 things for each individual file, but it was slow when operating on many files--my goal is to update a couple thousand files at once. Getting the batch interfaces to work proved much trickier, but I eventually got batch checkout and checkin working (and it was definitely worth it--each operation was about 10X faster using the batch interfaces). However, I've gotten pretty stuck trying to make variable updating work. Here's my code for updating variables:
import win32com.client
import os
import comtypes.client as cc

# Generate (or load) the comtypes wrapper for the PDM type library.
cc.GetModule(r'C:\Program Files (x86)\SOLIDWORKS PDM\EdmInterface.dll')
import comtypes.gen._5FA2C692_8393_4F31_9BDB_05E6F807D0D3_0_5_22 as pdm_lib2

vault_name = 'vault_name'
folder_path = 'some_folder_path'

def connect_to_vault(vault_name, lib='comtypes'):
    if lib == 'comtypes':
        vault = cc.CreateObject('ConisioLib.EdmVault.1')
    else:
        vault = win32com.client.dynamic.Dispatch('ConisioLib.EdmVault.1')
    vault.LoginAuto(vault_name, 0)
    return vault

def getrefs(vault, filenames, folder_path):
    DocIDs = []
    ProjIDs = []
    for filename in filenames:
        temp_ProjID = vault.GetFolderFromPath(folder_path)
        temp_DocID = vault.GetFileFromPath(filename, temp_ProjID)[0]  # this fails when I use a comtypes-generated vault
        DocIDs.append(temp_DocID.ID)
        ProjIDs.append(temp_ProjID.ID)
    print('Document and Project IDs pulled')
    return DocIDs, ProjIDs

vault = connect_to_vault(vault_name)
ref_vault = connect_to_vault(vault_name, lib='win32com')
filenames = [folder_path + s for s in os.listdir(folder_path)]
DocIDs, ProjIDs = getrefs(ref_vault, filenames, folder_path)

# Using comtypes to update files
VarIDs = [54] * len(DocIDs)  # updating description only
var_values = [['foo' + str(s)] for s in range(len(DocIDs))]  # dummy values for now
update_vars = vault.CreateUtility(2)  # create an instance of BatchUpdate
for i, file in enumerate(DocIDs):
    update_vars.SetVar(file, VarIDs[i], var_values[i], '', 1)
pdm_error = [pdm_lib2.EdmBatchError2()] * len(DocIDs)
update_vars.CommitUpdate([pdm_error])
When I call update_vars.CommitUpdate([pdm_error]), I get the following error:
ArgumentError: argument 2: <class 'AttributeError'>: 'list' object has no attribute 'QueryInterface'
I'm not sure why this method is expecting an object with a 'QueryInterface' attribute--I'm only passing it a list of structs, not a full COM object like my file vault. I also tried using win32com to execute the method:
update_vars = ref_vault.CreateUtility(2)  # create an instance of BatchUpdate, use win32com instead
for i, file in enumerate(DocIDs):
    update_vars.SetVar(file, VarIDs[i], var_values[i], '', 1)
pdm_error = [pdm_lib2.EdmBatchError2()] * len(DocIDs)
update_vars.CommitUpdate([pdm_error])
And now I get this error:
Traceback (most recent call last):
  File "<ipython-input-222-0c49fb0861b9>", line 7, in <module>
    update_vars.CommitUpdate([pdm_error])
  File "D:\Users\apreacher\Documents\Shared Files\Python\Webscraping_projects\Helper Modules\pdm_lib.py", line 1500, in CommitUpdate
    , poCallback)
  File "C:\Users\apreacher\AppData\Local\Continuum\anaconda3\lib\site-packages\win32com\client\__init__.py", line 467, in _ApplyTypes_
    self._oleobj_.InvokeTypes(dispid, 0, wFlags, retType, argTypes, *args),
MemoryError: CreatingSafeArray
And this is where I'm stuck. I haven't been able to make any headway on getting the CommitUpdate method to work properly. I also have the method definitions from the files generated by makepy.py and comtypes, but I don't really know how to interpret them:
makepy.py method definition:
def CommitUpdate(self, ppoRetErrors=pythoncom.Missing, poCallback=0):
    'method Commit'
    return self._ApplyTypes_(3, 1, (3, 0), ((24612, 2), (9, 49)),
                             'CommitUpdate', None, ppoRetErrors, poCallback)
comtypes generated file:
COMMETHOD([dispid(3), helpstring('method Commit')], HRESULT, 'CommitUpdate',
( ['out'], POINTER(_midlSAFEARRAY(EdmBatchError2)), 'ppoRetErrors' ),
( ['in', 'optional'], POINTER(IEdmCallback), 'poCallback', 0 ),
( ['out', 'retval'], POINTER(c_int), 'plErrorCount' )),
Any ideas?
What version of PDM are you using? (I am on Pro 2019 sp4.)
I just noticed an inconsistency in your code. vault.CreateUtility(2) would (in .NET) return an object of type IEdmBatchUpdate. Four lines later, you are calling the method CommitUpdate, which only exists on the newer interface IEdmBatchUpdate2 (see https://help.solidworks.com/2020/english/api/epdmapi/EPDM.Interop.epdm~EPDM.Interop.epdm.IEdmBatchUpdate2.html ).
I can tell you for certain that under C#, this would require casting the result of CreateUtility to the proper object type.
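A hedged sketch of how that cast might be spelled in comtypes, assuming the generated pdm_lib2 module defines IEdmBatchUpdate2:

# Cast the utility to the newer interface before committing.
# IEdmBatchUpdate2 is assumed to exist in the comtypes-generated module.
update_vars = vault.CreateUtility(2)
update_vars2 = update_vars.QueryInterface(pdm_lib2.IEdmBatchUpdate2)
# Note: comtypes returns ['out'] parameters instead of taking them as
# arguments, so per the COMMETHOD definition above the call may look like:
# errors, error_count = update_vars2.CommitUpdate()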
It seems you are looking for a way to automate entry of data into file datacards.
Have you tried using data import rules? https://help.solidworks.com/2020/English/EnterprisePDM/Admin/c_Working_With_Variable_Values_overview.htm
I'm using Chef server, configuring a few different nodes/environments.
When asking for environment attributes using the PyChef API a few times in a row (refreshing a web page served by a Python server that calls the Chef server), I'm getting a ChefServerNotFoundError (the first few requests are fine, and on the third an exception is raised).
I guess there is some kind of firewall / anti-DDoS protection on this server, but I cannot figure out how to edit these settings.
Does anyone have any idea?
This is part of the method (it is called 3 times and then throws the exception):
env_nodes = Search('node').query('chef_environment: {0}'.format(env_name))
nodes_dict = {}
for n in env_nodes:
    node = Node(n['name'])
    nodes_dict[node.name] = node['ipaddress']
And this is the traceback:
File "C:\env\lib\site-packages\chef\search.py", line 91, in __getitem__
row_value = self.data['rows'][value]
File "C:\env\lib\site-packages\chef\search.py", line 59, in data
self._data = self.api[self.url]
TypeError: 'NoneType' object is not subscriptable`
When using PyChef in a webapp or other multi-threaded system, you should really pass in the API object explicitly. There is a system that tracks a default API target in a thread-local to make simple scripts easier, but in retrospect this was probably a mistake, as it leads to confusing issues like this. This would be a better version of that code, and also faster:
nodes_dict = {row.object.name: row.object['ipaddress'] for row in Search('node', 'chef_environment:{}'.format(env_name), api=api)}
Where api is the return value of chef.autoconfigure() or some other ChefAPI object.
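For instance, a minimal sketch; the server URL, key path, and client name are placeholders to adapt:

from chef import ChefAPI, Search

# Build the API object once and pass it explicitly to every query,
# instead of relying on the thread-local default.
api = ChefAPI('https://chef.example.com', '/etc/chef/client.pem', 'webapp-client')
nodes_dict = {
    row.object.name: row.object['ipaddress']
    for row in Search('node', 'chef_environment:{}'.format(env_name), api=api)
}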
I am trying to create a custom filter for OpenStack, using their FilterScheduler component. The documentation for the FilterScheduler is here: http://docs.openstack.org/developer/nova/devref/filter_scheduler.html#
Now, there is not much in the way of documentation for creating your own custom filter. In fact, the complete documentation is:
If you want to create your own filter you just need to inherit from BaseHostFilter and implement one method: host_passes. This method should return True if host passes the filter. It takes host_state (describes host) and filter_properties dictionary as the parameters.
As an example, nova.conf could contain the following scheduler-related settings:
--scheduler_driver=nova.scheduler.FilterScheduler
--scheduler_available_filters=nova.scheduler.filters.standard_filters
--scheduler_available_filters=myfilter.MyFilter
--scheduler_default_filters=RamFilter,ComputeFilter,MyFilter
I have created a custom "test_filter.py" -- it is very similar to "all_hosts_filter.py", which is the simplest standard filter.
Here it is in its entirety:
from nova.scheduler import filters
from nova.openstack.common import log as logging

LOG = logging.getLogger(__name__)

class TestFilter(filters.BaseHostFilter):
    """NOOP host filter. Returns all hosts."""

    def host_passes(self, host_state, filter_properties):
        LOG.debug("COMING FROM: nova/scheduler/filters/test_filter.py")
        return True
But when I put this file, "test_filter.py", in the nova/scheduler/filters folder and restart OpenStack I get the following exception:
CRITICAL nova [-] Class test_filter could not be found: 'module' object has no attribute 'test_filter'
It appears that OpenStack is registering and trying to import my new filter, but some error is occurring. For reference, this is what the relevant sections of my /etc/nova/nova.conf file look like:
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=nova.scheduler.filters.test_filter.TestFilter
scheduler_default_filters=TestFilter,RamFilter,ComputeFilter
======
UPDATE: 15th April, 20:00 BST.
An update to this question, still struggling. After discussing the problem with boris-42 on the OpenStack IRC channel we have investigated a bit more:
Openstack-scheduler is run as a service from /usr/bin/nova-scheduler.
It then has an error:
"Inner Exception: 'module' object has no attribute 'test_filter' from (pid=32696) import_class /usr/lib/python2.7/dist-packages/nova/utils.py:78"
Which suggests it is using the /usr/lib/python2.7/dist-packages/nova/ folder for the source files of the installation.
Putting my custom "test_filter.py" into /usr/lib/python2.7/dist-packages/nova/scheduler/filters causes the error above.
However, on closer inspection it appears that all other files in the /usr/lib/python2.7/dist-packages/nova/scheduler/filters folder are actually links to the files in /usr/share/pyshared/nova/scheduler/filters.
So I put my "test_filter.py" in /usr/share/pyshared/nova/scheduler/filters -- and then created a symbolic link in the original folder.
This results in exactly the same error. As long as the file is either present or a link exists in the folder /usr/lib/python2.7/dist-packages/nova/scheduler/filters, the error occurs.
The nova.conf file has been updated as follows:
scheduler_available_filters=nova.scheduler.filters.TestFilter
scheduler_default_filters=TestFilter
I don't think you have to put your file in /usr/lib/python2.7/dist-packages/nova/scheduler/filters. You can put it anywhere; just make sure that path is on PYTHONPATH.
As mentioned in the example:
If you want to create your own filter you just need to inherit from BaseHostFilter and implement one method: host_passes. This method should return True if host passes the filter. It takes host_state (describes host) and filter_properties dictionary as the parameters.
As an example, nova.conf could contain the following scheduler-related settings:
......
--scheduler_available_filters=myfilter.MyFilter
.......
You have to specify myfilter.MyFilter without the nova.scheduler.filters prefix.
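So a standalone module along these lines should work; the file location is only an example, any directory on the scheduler's PYTHONPATH will do:

# myfilter.py -- placed anywhere importable, e.g. /opt/stack/filters/
from nova.scheduler import filters

class MyFilter(filters.BaseHostFilter):
    """Example no-op filter: passes every host."""

    def host_passes(self, host_state, filter_properties):
        return True

Then point nova.conf at it with the lines from the quoted example: scheduler_available_filters=myfilter.MyFilter, and MyFilter added to scheduler_default_filters.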
I have a question regarding contact creation using the Python Google Data API.
I am trying the example for contact creation with Python, exactly as it appears on the documentation page (https://developers.google.com/google-apps/contacts/v3/#creating_contacts).
So, I created the client as follows:
email = '<my gmail uid>'
password = '<my gmail pwd>'
gd_client = gdata.contacts.client.ContactsClient(source='GoogleInc-ContactsPythonSample-1')
try:
    gd_client.ClientLogin(email, password, gd_client.source)
except gdata.client.BadAuthentication:
    print 'Invalid user credentials given.'
    gd_client = None
Then I executed the function using:
create_contact(gd_client)
What I get from this call is:
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
  File "<ipython console>", line 23, in create_contact
AttributeError: 'module' object has no attribute 'PostCode'
So I want to ask whether I am doing something wrong, whether this is a known bug, or whether the documentation is simply outdated.
Thanks.
P.S. A small comment: I think a better wrapping of the Google Data API in the Python library would be useful. I spent significant time finding, within the API implementation, which fields should be set (directly!) and which classes should be used to assign them.
OK, it seems that I can answer my own question, which is partly a duplicate of:
how to create google contact?
It turns out that the sample code in the documentation is invalid (thanks, Google):
PostCode should be Postcode, and the instant messaging address is also incorrect.
After fixing the former and removing the latter, the call completes successfully.
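For anyone hitting the same AttributeError, a sketch of the corrected fragment; apart from the Postcode spelling this mirrors the documentation sample, so treat the address values and the new_contact object as placeholders:

import gdata.data

new_contact.structured_postal_address.append(
    gdata.data.StructuredPostalAddress(
        rel=gdata.data.WORK_REL, primary='true',
        street=gdata.data.Street(text='1600 Amphitheatre Pkwy'),
        city=gdata.data.City(text='Mountain View'),
        region=gdata.data.Region(text='CA'),
        # The docs say gdata.data.PostCode; the class is actually Postcode.
        postcode=gdata.data.Postcode(text='94043'),
        country=gdata.data.Country(text='United States')))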