I'm following the instructions from the Google Developers guide to create a custom message option. I used their example but received an error:
Traceback (most recent call last):
File "test_my_opt.py", line 2, in <module>
value = my_proto_file_pb2.MyMessage.DESCRIPTOR.GetOptions().Extensions[my_proto_file_pb2.my_option]
File "(...)\google\protobuf\internal\python_message.py", line 1167, in __getitem__
_VerifyExtensionHandle(self._extended_message, extension_handle)
File "(...)\google\protobuf\internal\python_message.py", line 170, in _VerifyExtensionHandle
message.DESCRIPTOR.full_name))
KeyError: 'Extension "my_option" extends message type "google.protobuf.MessageOptions", but this message is of type "google.protobuf.MessageOptions".'
I simply used the following code:
import my_proto_file_pb2
value = my_proto_file_pb2.MyMessage.DESCRIPTOR.GetOptions().Extensions[my_proto_file_pb2.my_option]
And this proto file:
import "beans-protobuf/proto/src/descriptor.proto";
extend google.protobuf.MessageOptions {
optional string my_option = 51234;
}
message MyMessage {
option (my_option) = "Hello world!";
}
Everything is just like in the guide... so how should I access this option without the error?
import "beans-protobuf/proto/src/descriptor.proto";
I think this is the problem. The correct import statement for descriptor.proto is:
import "google/protobuf/descriptor.proto";
The path string is important because you need to be extending the original definitions of the descriptor types, not some copy of them. google/protobuf/descriptor.proto becomes the module google.protobuf.descriptor_pb2 in Python, and the Protobuf library expects that any custom options are extensions to the types in there. But you are actually extending beans-protobuf/proto/src/descriptor.proto, which becomes beans_protobuf.proto.src.descriptor_pb2 in Python, which is a completely different module! Hence, the protobuf library gets confused and doesn't think these extensions are applicable to protobuf descriptors.
I think if you just change the import statement, everything should work. When protobuf is correctly installed, google/protobuf/descriptor.proto should always work as an import -- there's no need to provide your own copy of the file.
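For reference, once the import is corrected and my_proto_file_pb2 is regenerated, the option can be read with exactly the lookup from the question; a minimal sketch:
import my_proto_file_pb2

# Read the custom option off the generated message descriptor. This is the
# same lookup as in the question; it works once the extension really extends
# the canonical google.protobuf.MessageOptions.
options = my_proto_file_pb2.MyMessage.DESCRIPTOR.GetOptions()
value = options.Extensions[my_proto_file_pb2.my_option]
print(value)  # "Hello world!"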
I'm writing a simple email filter to work on incoming Outlook messages on Windows 10, and I want to code it up in Python using the win32com library, under Anaconda. I also want to avoid using magic numbers for the "Inbox" as I see in other examples, and would rather use constants that should be defined under win32com.client.constants. But I'm running into simple yet surprising errors:
So, I concocted the following simple code, loosely based upon https://stackoverflow.com/a/65800130/257924 :
import sys
import win32com.client
try:
    outlookApp = win32com.client.Dispatch("Outlook.Application")
except:
    print("ERROR: Unable to load Outlook")
    sys.exit(1)
outlook = outlookApp.GetNamespace("MAPI")
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
print("ofContacts", type(ofContacts))
sys.exit(0)
Running that under an Anaconda-based installation (Anaconda3 2022.10, Python 3.9.13 64-bit) on Windows 10 errors out with:
(base) c:\Temp>python testing.py
Traceback (most recent call last):
File "c:\Temp\testing.py", line 11, in <module>
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
File "C:\Users\brentg\Anaconda3\lib\site-packages\win32com\client\__init__.py", line 231, in __getattr__
raise AttributeError(a)
AttributeError: olFolderContacts
Further debugging indicates that __getattr__ in the __init__.py from the traceback above iterates over the __dicts__ attribute, which for some reason is an empty list. See the excerpt of that class below:
class Constants:
    """A container for generated COM constants."""

    def __init__(self):
        self.__dicts__ = []  # A list of dictionaries

    def __getattr__(self, a):
        for d in self.__dicts__:
            if a in d:
                return d[a]
        raise AttributeError(a)

# And create an instance.
constants = Constants()
What is required to have win32com properly initialize that constants object?
The timestamps on the __init__.py file show 10/10/2021, in case that is relevant.
The short answer is to change:
outlookApp = win32com.client.Dispatch("Outlook.Application")
to
outlookApp = win32com.client.gencache.EnsureDispatch("Outlook.Application")
The longer answer is that win32com can work with COM interfaces in one of two ways: late- and early-binding.
With late-binding, your code knows nothing about the Dispatch interface, i.e. it doesn't know which methods, properties or constants are available. When you call a method on the Dispatch interface, win32com doesn't know whether that method exists or what parameters it takes: it just sends what it is given and hopes for the best!
With early-binding, your code relies on previously-captured information about the Dispatch interface, taken from its Type Library. This information is used to create local Python wrappers for the interface which know all the methods and their parameters. At the same time it populates the Constants dictionary with any constants/enums contained in the Type Library.
win32com has a catch-all win32com.client.Dispatch() function which will try to use early-binding if the local wrapper files are present, otherwise will fall back to using late-binding. My problem with the package is that the caller doesn't always know what they are getting, as in the OP's case.
The alternative win32com.client.gencache.EnsureDispatch() function enforces early-binding and ensures any constants are available. If the local wrapper files are not available, they will be created (you might find them under %LOCALAPPDATA%\Temp\gen_py\xx\CLSID where xx is the Python version number, and CLSID is the GUID for the Type Library). Once these wrappers are created once then the generic win32com.client.Dispatch() will use these files.
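Putting it together, here is a minimal sketch of the script from the question with that one change applied (everything else is exactly as posted above):
import sys
import win32com.client

try:
    # EnsureDispatch builds (or reuses) the early-binding wrappers, which also
    # populates win32com.client.constants with the Outlook enums.
    outlookApp = win32com.client.gencache.EnsureDispatch("Outlook.Application")
except Exception:
    print("ERROR: Unable to load Outlook")
    sys.exit(1)

outlook = outlookApp.GetNamespace("MAPI")
# olFolderContacts now resolves instead of raising AttributeError
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
print("ofContacts", type(ofContacts))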
I'm a network engineer who is new to programming, so if I leave out any details, do let me know; I'm trying my best to explain the problem here :) Getting a solution to this problem is very important to me, so any input would be highly appreciated.
Problem Statement:
I wrote a Python script which is used by a Brython script (used in HTML like JavaScript, converted to JavaScript at compile time). Basically, when I click a button on my webpage, it triggers a Python script which I have referenced in my HTML like this:
<body onload="brython()">
<script type="text/python" src="ip_tools.py"></script>
The Python script looks like this:
from browser import document as doc, bind
import ipaddress

def subcalc(ev):
    # Using the ipaddress module I get the ip_network, so the value of ip will be like 192.168.1.0/24 (an object of the ipaddress class)
    ip = ipaddress.ip_network(doc['ipadd'].value + '/' + doc['ipv4_mask'].value, strict=False)
    # This line gives the aforementioned error in the Chrome console
    print(ip.hostmask)

doc["sub_cal"].bind("click", subcalc)  # Triggers the subcalc function when the user clicks the button with id "sub_cal" on the webpage
The complete error from Chrome Console:
error in get.apply Error
at _b_.TypeError.$factory (eval at $make_exc (brython.js:7647), <anonymous>:161:327)
at __get__491 (eval at exec_module (brython.js:8991), <anonymous>:5898:62)
at __BRYTHON__.builtins.object.object.__getattribute__ (brython.js:5332)
at Object.$B.$getattr (brython.js:6731)
at subcalc0 (eval at $B.loop (brython.js:5230), <anonymous>:120:48)
at HTMLButtonElement.<anonymous> (brython.js:12784)
brython.js:5334 get attr hostmask of Object
brython.js:5335 function () { [native code] }
brython.js:6108 Traceback (most recent call last):
TypeError: Cannot use cached_property instance without calling __set_name__ on it.
I have a similar script working fine; the only difference is that this script uses an imported Python library. The other scripts don't use such libraries, apart from the browser library (which is required by Brython to work with HTML content).
Thanks for reporting this, it was a bug in the implementation of PEP 487 in Brython. It is fixed in this commit.
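For context on what the error message itself means (this is standard CPython behaviour rather than anything Brython-specific): functools.cached_property relies on the PEP 487 __set_name__ hook being called while the class body is executed. A minimal sketch that reproduces the same message in plain CPython, using made-up names:
import functools

class Net:
    @functools.cached_property
    def hostmask(self):
        return "255.255.255.0"  # placeholder value

print(Net().hostmask)  # works: __set_name__ ran during class creation

# Attaching a cached_property after the class already exists skips
# __set_name__, which reproduces the error from the traceback above:
Net.broadcast = functools.cached_property(lambda self: "192.168.1.255")
try:
    Net().broadcast
except TypeError as exc:
    print(exc)  # Cannot use cached_property instance without calling __set_name__ on it.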
My goal is to automate configuring firewalls on CentOS 7 machines using Python.
The OS comes with firewalld, so that's what I'm using. I looked into it and found that it uses dbus (I've never heard of or dealt with any of this - please correct me if anything I say is incorrect.)
I found this documentation for how to control dbus processes using Python:
http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.txt
I checked and the version of Python that comes with the OS includes the dbus module, so it seems like a promising start.
That document suggests that I needed to learn more about what firewalld exposes via the dbus interface. So I did some more research and found this:
https://www.mankier.com/5/firewalld.dbus
The first document says I need to start out with a "well-known name". Their example for such a thing was org.freedesktop.NetworkManager. The second document is titled firewalld.dbus, so I figured that was as good a name as any to try since the document doesn't explicitly give a name anywhere else.
The first document also says I need a name for an object path. Their example is /org/freedesktop/NetworkManager. The second document has an object path of /org/fedoraproject/FirewallD1.
I put those together and tried using the first method the first document suggested, SystemBus's get_object():
>>> from dbus import SystemBus
>>> bus = SystemBus()
>>> proxy = bus.get_object('firewalld.dbus', '/org/fedoraproject/FirewallD1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 241, in get_object
follow_name_owner_changes=follow_name_owner_changes)
File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 248, in __init__
self._named_service = conn.activate_name_owner(bus_name)
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 180, in activate_name_owner
self.start_service_by_name(bus_name)
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 278, in start_service_by_name
'su', (bus_name, flags)))
File "/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in call_blocking
message, timeout)
dbus.exceptions.DBusException:
org.freedesktop.DBus.Error.ServiceUnknown:
The name firewalld.dbus was not provided by any .service files
I also gave org.fedoraproject.FirewallD1 a try as the first parameter but ended up with a similar error message.
Why are these not working? Is there some way I can discover what the proper names are? It mentions ".service files" at the end of the error message... where would such a file be located?
Edit: Found several ".service" files by using find / -name '*.service'. One of them is at /usr/lib/systemd/system/firewalld.service... seems pretty promising, so I'll check it out.
Edit 2: It's a rather short file... only about 10 lines. One of them says BusName=org.fedoraproject.FirewallD1. So I'm not sure why it said the name was not provided by any .service files... unless it's not using this file for some reason?
If the unit file says:
BusName=org.fedoraproject.FirewallD1
Then maybe you should try using that as your bus name:
>>> import dbus
>>> bus = dbus.SystemBus()
>>> p = bus.get_object('org.fedoraproject.FirewallD1', '/org/fedoraproject/FirewallD1')
>>> p.getDefaultZone()
dbus.String(u'FedoraWorkstation')
I figured this out based on the fact that this:
>>> help(bus.get_object)
says that the get_object call looks like this:
get_object(self, bus_name, object_path, introspect=True, follow_name_owner_changes=False, **kwargs)
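As for discovering which names exist in the first place: dbus-python can list the names currently registered on the bus, which is a quick way to find the FirewallD one. A small sketch (the filter string is just for illustration):
import dbus

bus = dbus.SystemBus()
# list_names() returns every bus name currently owned on the system bus;
# filtering it quickly turns up org.fedoraproject.FirewallD1.
for name in bus.list_names():
    if 'FirewallD' in str(name):
        print(name)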
I am trying to get Network Statistics for my Windows 7 system using PyWin32.
The steps I followed:
1) Run the COM MakePy utility and then select "Network List Manager 1.0 Type Library" under the type libraries.
2) The above process generated this Python file.
Now the problem I am facing is: after the above two steps, what should my next step be? I tried a couple of things, like:
I copied the CLSID = IID('{DCB00000-570F-4A9B-8D69-199FDBA5723B}') line from the generated Python file above and used it like this:
>>> import win32com
>>> obj = win32com.client.gencache.GetClassForCLSID("{DCB00000-570F-4A9B-8D69-199FDBA5723B}")
>>> obj.GetConnectivity()
Traceback (most recent call last):
File "<interactive input>", line 1, in <module>
TypeError: unbound method GetConnectivity() must be called with INetworkListManager instance as first argument (got nothing instead)
When I do obj.method() it shows a list of all the available methods.
So now I have no idea what to do, how to proceed, or what the general process of using a type library with pywin32 is.
The above task is just part of my learning process for how to use PyWin32 and the COM MakePy utility.
Is this even achievable using pywin32?
You'll need to use win32com.client.Dispatch to actually create the object.
Also, the class you start with is the CoClass, in this case
class NetworkListManager(CoClassBaseClass): # A CoClass
is the one you want.
win32com.client.Dispatch('{DCB00C01-570F-4A9B-8D69-199FDBA5723B}')
works here.
Many of these Dispatch classes have a human readable dotted name as an alias, although
this particular one doesn't seem to.
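A small sketch of how that might look end to end, assuming the Network List Manager object exposes GetConnectivity and IsConnectedToInternet as described in its type library (method names taken from the generated MakePy file, so treat them as unverified here):
import win32com.client

# Dispatch the NetworkListManager CoClass by the CLSID shown above.
nlm = win32com.client.Dispatch('{DCB00C01-570F-4A9B-8D69-199FDBA5723B}')

# GetConnectivity() returns an NLM_CONNECTIVITY bitmask describing the
# machine's current IPv4/IPv6 connectivity; IsConnectedToInternet is a
# boolean property.
print(nlm.GetConnectivity())
print(nlm.IsConnectedToInternet)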
I want to disallow access to the file system from clients' code, so I thought I could overwrite the open function:
env = {
    'open': lambda *a: StringIO("you can't use open")
}
exec(open('user_code.py'), env)
but I got this
unqualified exec is not allowed in function 'my function' it contains a
nested function with free variables
I also tried this:
def open_exception(*a):
    raise Exception("you can't use open")

env = {
    'open': open_exception
}
but I got the same error as before (not "you can't use open").
I want to prevent:
executing this:
"""def foo():
return open('some_file').read()
print foo()"""
and evaluating this:
"open('some_file').write('some text')"
I also use a session to store previously evaluated code, so I need to prevent executing this:
"""def foo(s):
return open(s)"""
and then evaluating this:
"foo('some').write('some text')"
I can't use a regex, because someone could use eval inside a string:
"eval(\"opxx('some file').write('some text')\".replace('xx', 'en'))"
Is there any way to prevent access to the file system inside exec/eval? (I need both.)
There's no way to prevent access to the file system inside exec/eval. Here's example code that demonstrates how user code can reach otherwise restricted classes; this approach always works:
import subprocess
code = """[x for x in ().__class__.__bases__[0].__subclasses__()
if x.__name__ == 'Popen'][0](['ls', '-la']).wait()"""
# Executing the `code` will always run `ls`...
exec code in dict(__builtins__=None)
And don't think about filtering the input, especially with regex.
You might consider a few alternatives:
ast.literal_eval, if you can limit yourself to simple expressions only (see the sketch after this list)
Using another language for user code. You might look at Lua or JavaScript - both are sometimes used to run unsafe code inside sandboxes.
There's the pysandbox project, though I can't guarantee you that the sandboxed code is really safe. Python wasn't designed to be sandboxed, and in particular the CPython implementation wasn't written with sandboxing in mind. Even the author seems to doubt the possibility to implement such sandbox safely.
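To illustrate the first alternative, here's a small sketch of ast.literal_eval: it only accepts Python literals (strings, numbers, tuples, lists, dicts, booleans, None), so a call such as open() is rejected outright:
import ast

# Plain literals are evaluated safely:
print(ast.literal_eval("{'port': 8080, 'hosts': ['10.0.0.1', '10.0.0.2']}"))

# Anything involving a call is rejected, so there is nothing to sandbox:
try:
    ast.literal_eval("open('some_file').read()")
except ValueError as exc:
    print("rejected:", exc)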
You can't turn exec() and eval() into a safe sandbox. You can always get access to the builtin module, as long as the sys module is available:
sys.modules[().__class__.__bases__[0].__module__].open
And even if sys is unavailable, you can still get access to any new-style class defined in any imported module in basically the same way. This includes all the IO classes in the io module.
This actually can be done.
Practically just what you describe can be accomplished on Linux, contrary to the other answers here. That is, you can achieve a setup where an exec-like call runs untrusted code under security that is reasonably difficult to penetrate, and which still allows output of the result. Untrusted code is not allowed to access the filesystem at all, except for reading specifically allowed parts of the Python VM and standard library.
If that's close enough to what you wanted, read on.
I'm envisioning a system where your exec-like function spawns a subprocess under a very strict AppArmor profile, such as the one used by Straitjacket (see here and here). This will limit all filesystem access at the kernel level, other than files specifically allowed to be read. This will also limit the process's stack size, max data segment size, max resident set size, CPU time, the number of signals that can be queued, and the address space size. The process will have locked memory, cores, flock/fcntl locks, POSIX message queues, etc, wholly disallowed. If you want to allow using size-limited temporary files in a scratch area, you can mkstemp it and make it available to the subprocess, and allow writes there under certain conditions (make sure that hard links are absolutely disallowed). You'd want to make sure to clear out anything interesting from the subprocess environment and put it in a new session and process group, and close all FDs in the subprocess except for the stdin/stdout/stderr, if you want to allow communication with those.
If you want to be able to get a Python object back out from the untrusted code, you could wrap it in something which prints the result's repr to stdout, and after you check its size, you evaluate it with ast.literal_eval(). That pretty severely limits the possible types of object that can be returned, but really, anything more complicated than those basic types probably carries the possibility of sekrit maliciousness intended to be triggered within your process. Under no circumstances should you use pickle for the communication protocol between the processes.
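A rough sketch of the parent side of that protocol, assuming the untrusted code runs in a separate, confined process via a hypothetical run_untrusted.py wrapper that prints the repr of the result to stdout:
import ast
import subprocess

MAX_OUTPUT = 64 * 1024  # refuse absurdly large results before parsing them

proc = subprocess.Popen(
    ['python', 'run_untrusted.py'],  # hypothetical confined wrapper script
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    close_fds=True,
)
out, err = proc.communicate()

if len(out) > MAX_OUTPUT:
    raise ValueError("untrusted code returned too much data")

# ast.literal_eval only reconstructs basic literal types, so nothing
# executable can come back across the process boundary.
result = ast.literal_eval(out.decode().strip())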
As #Brian suggests, overriding open doesn't work:
def raise_exception(*a):
    raise Exception("you can't use open")

open = raise_exception
print eval("open('test.py').read()", {})
this displays the content of the file, but this (merging #Brian's and #lunaryorn's answers):
import sys

def raise_exception(*a):
    raise Exception("you can't use open")

__open = sys.modules['__builtin__'].open
sys.modules['__builtin__'].open = raise_exception

print eval("open('test.py').read()", {})
will throw this:
Traceback (most recent call last):
File "./test.py", line 11, in <module>
print eval("open('test.py').read()", {})
File "<string>", line 1, in <module>
File "./test.py", line 5, in raise_exception
raise Exception("you can't use open")
Exception: you can't use open
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/apport_python_hook.py", line 48, in apport_excepthook
if not enabled():
File "/usr/lib/python2.6/dist-packages/apport_python_hook.py", line 23, in enabled
conf = open(CONFIG).read()
File "./test.py", line 5, in raise_exception
raise Exception("you can't use open")
Exception: you can't use open
Original exception was:
Traceback (most recent call last):
File "./test.py", line 11, in <module>
print eval("open('test.py').read()", {})
File "<string>", line 1, in <module>
File "./test.py", line 5, in raise_exception
raise Exception("you can't use open")
Exception: you can't use open
and you can still access open outside the user code via __open.
"Nested function" refers to the fact that it's declared inside another function, not that it's a lambda. Declare your open override at the top level of your module and it should work the way you want.
Also, I don't think this is totally safe. Preventing open is just one of the things you need to worry about if you want to sandbox Python.
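A minimal sketch of that suggestion, with the override defined at module level and passed in as the globals for the executed code (and, as the caveat above says, this only shadows the name open; it is not a real sandbox):
def blocked_open(*args, **kwargs):
    raise RuntimeError("you can't use open")

user_code = "print(open('some_file').read())"

# Because 'open' is present in the globals mapping, the executed code finds
# the override before it ever reaches the builtin open.
env = {'open': blocked_open}
exec(user_code, env)  # raises RuntimeError: you can't use open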