Check if PIE is enabled in Python

Since yesterday I have been looking at how to detect whether the "PIE" protection is enabled. So far I have been parsing the relocation entries to see whether _ITM_deregisterTMClone is present or not. Is there a better way to detect PIE than going through readelf output?
Here is what I currently have:
import subprocess

def display_pie(counter):
    if counter == 1:
        print("PIE: Enabled")
    else:
        print("PIE: No PIE")

def check_file_pie(data_file):
    data = []
    data2 = []
    ctn = 0
    check = subprocess.Popen(["readelf", "-r", data_file],
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
    result = check.stdout.readlines()
    for x in result:
        data.append(list(x))
    for lines in data:
        data2.append("".join(map(chr, lines)))
    for new_lines in data2:
        if "_ITM_deregisterTMClone" in new_lines:
            ctn += 1
    display_pie(ctn)
Thank you. It's quite technical, so if someone can explain a better way to check for a Position Independent Executable, I'm interested!

You can use pyelftools to check if the ELF is a shared object and if the image base address is zero:
def is_pie(filename):
    from elftools.elf.elffile import ELFFile
    with open(filename, 'rb') as file:
        elffile = ELFFile(file)
        base_address = next(seg for seg in elffile.iter_segments()
                            if seg['p_type'] == 'PT_LOAD')['p_vaddr']
        return elffile.header['e_type'] == 'ET_DYN' and base_address == 0
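Usage might look like this (the result depends on how the binary on your system was built; /bin/ls is just an assumed example path):
>>> is_pie('/bin/ls')
True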

You can use pwntools, which has functionality for manipulating ELF files. Example usage:
>>> from pwn import *
>>> e = ELF('your-elf-file')
>>> e.pie
True
If you want to know how it is implemented, you can find the source code here.
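If you would rather avoid extra dependencies, here is a minimal sketch of my own (not taken from either library) that reads just the e_type field of the ELF header with the standard struct module. PIE binaries are ET_DYN, the same property the answers above check; note that ordinary shared libraries are also ET_DYN, which is why the pyelftools answer additionally checks that the first PT_LOAD segment is mapped at address zero.
import struct

def looks_like_pie(path):
    # Read e_type straight from the ELF header.
    # ET_EXEC (2) = fixed-address executable, ET_DYN (3) = shared object / PIE.
    with open(path, 'rb') as f:
        ident = f.read(16)                        # e_ident
        if ident[:4] != b'\x7fELF':
            raise ValueError('not an ELF file')
        endian = '<' if ident[5] == 1 else '>'    # EI_DATA: 1 = little-endian
        (e_type,) = struct.unpack(endian + 'H', f.read(2))
    return e_type == 3                            # ET_DYN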

Related

Very large pandas dataframe - keeping count

Assume this is a sample of my data: [dataframe screenshot]
The entire dataframe is stored in a CSV file (dataframe.csv) that is 40 GB, so I can't open all of it at once.
I am hoping to find the 25 most dominant names for all genders. My instinct is to create a for loop that runs through the file (because I can't open it at once) and have a Python dictionary that holds a counter for each name (which I will increment as I go through the data).
To be honest, I'm confused about where to even start (how to create the dictionary, since to_dict() does not appear to do what I'm looking for). Also, is this even a good solution? Is there a more efficient way someone can think of?
SUMMARY -- sorry if the question is a bit long:
the csv file storing the data is very big and I can't open it at once, but I'd like to find the top 25 dominant names in the data. Any ideas on what to do and how to do it?
I'd appreciate any help I can get! :)
Thanks for your interesting task! I've implemented a pure numpy + pandas solution. It uses a sorted array to keep names and counts, so the algorithm should be around O(n * log n) complexity.
I didn't find a hash table in numpy; a hash table would definitely be faster (O(n)). Hence I used numpy's existing sorting/inserting routines instead.
I also used pandas' .read_csv() with the iterator = True, chunksize = 1 << 24 params; this allows reading the file in chunks and producing pandas dataframes of fixed size from each chunk.
Note! In the first runs (until the program is debugged), set limit_chunks (the number of chunks to process) in the code to a small value (like 5). This is to check that the whole program runs correctly on partial data.
The program needs the one-time command python -m pip install pandas numpy to install these two packages if you don't have them.
Progress is printed once in a while: total megabytes done plus speed.
The result will be printed to the console and saved to the file named by fname_res; all constants configuring the script are placed at the beginning of the script. The topk constant controls how many top names will be output to the file/console.
I'm curious how fast my solution is. If it is too slow, maybe I'll devote some time to writing a nice HashTable class in pure numpy.
You can also try running the following code online.
import os, math, time, sys
# Needs: python -m pip install pandas numpy
import pandas as pd, numpy as np

fname = 'test.csv'
fname_res = 'test.res'
chunk_size = 1 << 24
limit_chunks = None  # Number of chunks to process, set to None to process whole file
all_genders = ['Male', 'Female']
topk = 1000  # How many top names to output
progress_step = 1 << 23  # in bytes

fsize = os.path.getsize(fname)

#el_man = enlighten.get_manager() as el_man
#el_ctr = el_man.counter(color = 'green', total = math.ceil(fsize / 2 ** 20), unit = 'MiB', leave = False)

tables = {g: {
    'vals': np.full([1], chr(0x10FFFF), dtype = np.str_),
    'cnts': np.zeros([1], dtype = np.int64),
} for g in all_genders}

tb = time.time()

def Progress(
    done, total = min([fsize] + ([chunk_size * limit_chunks] if limit_chunks is not None else [])),
    cfg = {'progressed': 0, 'done': False},
):
    if not cfg['done'] and (done - cfg['progressed'] >= progress_step or done >= total):
        if done < total:
            while cfg['progressed'] + progress_step <= done:
                cfg['progressed'] += progress_step
        else:
            cfg['progressed'] = total
        sys.stdout.write(
            f'{str(round(cfg["progressed"] / 2 ** 20)).rjust(5)} MiB of ' +
            f'{str(round(total / 2 ** 20)).rjust(5)} MiB ' +
            f'speed {round(cfg["progressed"] / 2 ** 20 / (time.time() - tb), 4)} MiB/sec\n'
        )
        sys.stdout.flush()
        if done >= total:
            cfg['done'] = True

with open(fname, 'rb', buffering = 1 << 26) as f:
    for i, df in enumerate(pd.read_csv(f, iterator = True, chunksize = chunk_size)):
        if limit_chunks is not None and i >= limit_chunks:
            break
        if i == 0:
            name_col = df.columns.get_loc('First Name')
            gender_col = df.columns.get_loc('Gender')
        names = np.array(df.iloc[:, name_col]).astype('str')
        genders = np.array(df.iloc[:, gender_col]).astype('str')
        for g in all_genders:
            ctab = tables[g]
            gnames = names[genders == g]
            vals, cnts = np.unique(gnames, return_counts = True)
            if vals.size == 0:
                continue
            if ctab['vals'].dtype.itemsize < names.dtype.itemsize:
                ctab['vals'] = ctab['vals'].astype(names.dtype)
            poss = np.searchsorted(ctab['vals'], vals)
            exist = ctab['vals'][poss] == vals
            ctab['cnts'][poss[exist]] += cnts[exist]
            nexist = np.flatnonzero(exist == False)
            ctab['vals'] = np.insert(ctab['vals'], poss[nexist], vals[nexist])
            ctab['cnts'] = np.insert(ctab['cnts'], poss[nexist], cnts[nexist])
        Progress(f.tell())
Progress(fsize)

with open(fname_res, 'w', encoding = 'utf-8') as f:
    for g in all_genders:
        f.write(f'{g}:\n\n')
        print(g, '\n')
        order = np.flip(np.argsort(tables[g]['cnts']))[:topk]
        snames, scnts = tables[g]['vals'][order], tables[g]['cnts'][order]
        if snames.size > 0:
            for n, c in zip(np.nditer(snames), np.nditer(scnts)):
                n, c = str(n), int(c)
                if c == 0:
                    continue
                f.write(f'{c} {n}\n')
                print(c, n.encode('ascii', 'replace').decode('ascii'))
        f.write(f'\n')
        print()
import pandas as pd
df = pd.read_csv("sample_data.csv")
print(df['First Name'].value_counts())
The second line will convert your CSV into a pandas dataframe, and the third line should print the occurrences of each name.
https://dfrieds.com/data-analysis/value-counts-python-pandas.html
This doesn't seem to be a case where pandas is really going to be an advantage. But if you're committed to going down that route, change the read_csv chunksize parameter, then filter out the useless columns.
Perhaps consider using a different set of tooling, such as a database, or even vanilla Python using a generator to populate a dict in the form of name: count.
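To illustrate the chunked route, here is a rough sketch of my own (it assumes the 'First Name' and 'Gender' column names used elsewhere in this thread, and a chunk size you would tune yourself); only per-name counters are ever held in memory:
import pandas as pd
from collections import Counter

counts = {}   # gender -> Counter of first names

# Stream the 40 GB file in pieces so it never has to fit in memory at once.
for chunk in pd.read_csv('dataframe.csv',
                         usecols=['First Name', 'Gender'],
                         chunksize=1_000_000):
    for gender, names in chunk.groupby('Gender')['First Name']:
        counts.setdefault(gender, Counter()).update(names)

for gender, counter in counts.items():
    print(gender, counter.most_common(25))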

How to use Higher Order Functions with csvReader in Python

Hi, I am trying to learn higher-order functions (HOFs) in Python. I understand their simple uses for reduce, map and filter. But here I need to create tuples of the stations the bikes came from and went to, with the number of events at those stations as the second value. The commented-out code is this done with normal functions (I left it as a dictionary, but that's easy to convert to tuples).
I've been racking my brain for a while and can't get it to work using HOFs. My idea right now is to somehow use map to go through the csv reader and add to the dictionary. For some reason I can't figure out what to do here. Any help understanding how to use these functions properly would be appreciated.
import csv

#def stations(reader):
#    Stations = {}
#    for line in reader:
#        startstation = line['start_station_name']
#        endstation = line['end_station_name']
#        Stations[startstation] = Stations.get(startstation, 0) + 1
#        Stations[endstation] = Stations.get(endstation, 0) + 1
#    return Stations

Stations = {}

def line_list(x):
    l = x['start_station_name']
    l2 = x['end_station_name']
    Stations[l] = Stations.get(l, 0) + 1
    Stations[l2] = Stations.get(l2, 0) + 1
    return dict(l, l2)

with open('citibike.csv', 'r') as fi:
    reader = csv.DictReader(fi)
    #for line in reader:
    output = list(map(line_list, reader))
    #print(listmap)
    #output1[:10]
    print(output)
list(map(...)) creates a list of results, not a dictionary.
If you want to fill in a dictionary, you can use reduce(), using the dictionary as the accumulator.
import csv
from functools import reduce

def line_list(Stations, x):
    l = x['start_station_name']
    l2 = x['end_station_name']
    Stations[l] = Stations.get(l, 0) + 1
    Stations[l2] = Stations.get(l2, 0) + 1
    return Stations

with open('citibike.csv', 'r') as fi:
    reader = csv.DictReader(fi)
    result = reduce(line_list, reader, {})

print(result)
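If you then need the result as (station, count) tuples rather than a dict, as the question mentions, the dict converts directly; for example:
station_counts = sorted(result.items(), key=lambda kv: kv[1], reverse=True)
print(station_counts[:10])   # list of (station, count) tuples, busiest first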

Python Trace Tables and ver 2 to ver 3 issue

I found this code for printing a program trace and it works fine in Python 2.
However, in Python 3 there are issues. I addressed the first one by replacing execfile(file_name) with exec(open(file_name).read()), but now there is still a KeyError: 'do_setlocale'.
I'm out of my depth here - I just want an easy way to trace variables in programs line by line. I like the way this program works, and it would be great to get it working with Python 3. I even tried an online conversion program but got the same KeyError: 'do_setlocale'.
Can anyone please help me to get it working?
import sys

if len(sys.argv) < 2:
    print __doc__
    exit()
else:
    file_name = sys.argv[1]

past_locals = {}
variable_list = []
table_content = ""
ignored_variables = set([
    'file_name',
    'trace',
    'sys',
    'past_locals',
    'variable_list',
    'table_content',
    'getattr',
    'name',
    'self',
    'object',
    'consumed',
    'data',
    'ignored_variables'])

def trace(frame, event, arg_unused):
    global past_locals, variable_list, table_content, ignored_variables
    relevant_locals = {}
    all_locals = frame.f_locals.copy()
    for k, v in all_locals.items():
        if not k.startswith("__") and k not in ignored_variables:
            relevant_locals[k] = v
    if len(relevant_locals) > 0 and past_locals != relevant_locals:
        for i in relevant_locals:
            if i not in past_locals:
                variable_list.append(i)
        table_content += str(frame.f_lineno) + " || "
        for variable in variable_list:
            table_content += str(relevant_locals[variable]) + " | "
        table_content = table_content[:-2]
        table_content += '\n'
        past_locals = relevant_locals
    return trace

sys.settrace(trace)
execfile(file_name)

table_header = "L || "
for variable in variable_list:
    table_header += variable + ' | '
table_header = table_header[:-2]
print table_header
print table_content

# python traceTable.py problem1.py

# problem1.py
a = 1
b = 2
a = a + b
That program has a couple of major flaws – for example, if the program being traced includes any functions with local variables, it will crash, even in Python 2.
Therefore, since I have nothing better to do, I wrote a program to do something like this called pytrace. It's written for Python 3.6, although it probably wouldn't take too long to make it work on lower versions if you need to.
Its output is a little different to your program's, but not massively so – the only thing that's missing is line numbers, which I imagine you could add in fairly easily (print frame.f_lineno at appropriate points). The rest is purely how the data are presented (your program stores all the output until the end so it can work out table headers, whereas mine prints everything as it goes).
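For reference, here is a minimal sketch of my own showing only the Python 3-specific changes to the original script (not a full rewrite): print becomes a function, execfile() gets replaced, and the row-building loop inside trace() should use relevant_locals.get() so that a name collected in one frame but absent from another (the likely source of the KeyError: 'do_setlocale') doesn't crash the tracer. run_traced is a hypothetical name of my own.
import sys

# Inside trace(), build each row defensively:
#     table_content += str(relevant_locals.get(variable, '')) + " | "
# instead of indexing relevant_locals[variable] directly, and turn the final
# print statements into print(table_header) / print(table_content).

def run_traced(file_name):
    # Python 3 replacement for execfile(file_name).
    with open(file_name) as f:
        code = compile(f.read(), file_name, 'exec')
    sys.settrace(trace)                    # trace() as defined in the question
    try:
        exec(code, {'__name__': '__main__'})
    finally:
        sys.settrace(None)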

Extracting and processing data from a txt file

I am a beginner in Python (and in programming). I have a large file containing repeating groups of 3 lines with numbers, then 1 empty line, and so on...
If I print the file it looks like:
1.93202838
1.81608154
1.50676177
2.35787777
1.51866227
1.19643624
...
I want to take each group of three numbers as one vector, do some math operations with them, write them back to a new file, and move on to the next three lines - the next vector. So here is my code (it doesn't work):
import math

inF = open("data.txt", "r+")
outF = open("blabla.txt", "w")

a = []
fin = []
b = []

for line in inF:
    a.append(line)
    if line.startswith(" \n"):
        fin.append(b)
        h1 = float(fin[0])
        k2 = float(fin[1])
        l3 = float(fin[2])
        h = h1/(math.sqrt(h1*h1+k1*k1+l1*l1)+1)
        k = k1/(math.sqrt(h1*h1+k1*k1+l1*l1)+1)
        l = l1/(math.sqrt(h1*h1+k1*k1+l1*l1)+1)
        vector = [str(h), str(k), str(l)]
        outF.write('\n'.join(vector)
        b = a
        a = []

inF.close()
outF.close()
print "done!"
I want to get a "vector" from each 3 lines in my file and put it into the blabla.txt output file. Thanks a lot!
My 'code comment' answer:
take care to close all parentheses, in order to match the opened ones! (this is very likely to raise a SyntaxError ;-) )
fin is created as an empty list and is never filled with numbers. Trying to read a value via fin[n] is therefore very likely to break with an IndexError;
k2 and l3 are created but never used;
k1 and l1 are used but never created; this is very likely to break with a NameError;
b is created as a copy of a, so it is a list. But you do fin.append(b): what do you expect in this case by appending (not extending) a list?
Hope this helps!
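Putting those fixes together, here is a minimal corrected sketch of my own (it keeps the question's normalisation, which divides by the vector length plus 1; write_vector is a hypothetical helper name):
import math

def write_vector(block, outF):
    h1, k1, l1 = block
    norm = math.sqrt(h1*h1 + k1*k1 + l1*l1) + 1        # the question's normalisation
    outF.write('\n'.join(str(x / norm) for x in (h1, k1, l1)) + '\n')

with open("data.txt") as inF, open("blabla.txt", "w") as outF:
    block = []
    for line in inF:
        stripped = line.strip()
        if stripped:                       # still inside a group of three numbers
            block.append(float(stripped))
        else:                              # a blank line closes the group
            if len(block) == 3:
                write_vector(block, outF)
            block = []
    if len(block) == 3:                    # last group may lack a trailing blank line
        write_vector(block, outF)

print("done!")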
This is only in the answers section for length and formatting.
Input and output.
Control flow
I know nothing of vectors; you might want to look into the math module or NumPy.
Those links should hopefully give you all the information you need to at least get started with this problem. As yuvi said, the code won't be written for you, but you can come back when you have something that isn't working as you expected or that you don't fully understand.
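For instance, if plain unit-vector normalisation is what you are after (note that the question's own formula adds 1 to the denominator), NumPy does it in a couple of lines; the numbers below are the first group from the question:
import numpy as np

v = np.array([1.93202838, 1.81608154, 1.50676177])   # first three numbers from the file
unit = v / np.linalg.norm(v)                         # divide by the vector's Euclidean length
print(unit)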

Database Compression in Python

I have hourly logs like
user1:joined
user2:log out
user1:added pic
user1:added comment
user3:joined
I want to compress all the flat files down to one file. There are around 30 million users in the logs, and I just want the latest log entry for each user across all the logs.
My end result is I want to have a log look like
user1:added comment
user2:log out
user3:joined
Now my first attempt on a small scale was to just do a dict like
log['user1'] = "added comment"
Will a dict of 30 million key/value pairs have a giant memory footprint? Or should I use something like SQLite to store them, then just dump the contents of the SQLite table back into a file?
If you intern() each log entry then you'll use only one string for each similar log entry regardless of the number of times it shows up, thereby lowering memory usage a lot.
>>> a = 'foo'
>>> b = 'foo'
>>> a is b
True
>>> b = 'f' + ('oo',)[0]
>>> a is b
False
>>> a = intern('foo')
>>> b = intern('f' + ('oo',)[0])
>>> a is b
True
You could also process the log lines in reverse -- then use a set to keep track of which users you've seen:
s = set()
# note, this piece is inefficient in that I'm reading all the lines
# into memory in order to reverse them... There are recipes out there
# for reading a file in reverse.
lines = open('log').readlines()
lines.reverse()
for line in lines:
    line = line.strip()
    user, op = line.split(':')
    if not user in s:
        print line
        s.add(user)
The various dbm modules (dbm in Python 3, or anydbm, gdbm, dbhash, etc. in Python 2) let you create simple databases of key to value mappings. They are stored on the disk so there is no huge memory impact. And you can store them as logs if you wish to.
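As a concrete sketch of that idea (my own example; it assumes the hourly files have been concatenated into a single hourly.log, and uses Python 3's dbm module), the on-disk database ends up holding only the latest action per user:
import dbm

# Later lines simply overwrite earlier ones, so after the pass the database
# holds only the most recent action for each user - without 30 million
# entries sitting in RAM.
with dbm.open('latest_actions', 'c') as db, open('hourly.log') as logs:
    for line in logs:
        user, action = line.rstrip('\n').split(':', 1)
        db[user] = action

# Write the compressed log back out as plain text.
with dbm.open('latest_actions', 'r') as db, open('compressed.log', 'w') as out:
    for key in db.keys():
        out.write(key.decode() + ':' + db[key].decode() + '\n')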
This sounds like the perfect kind of problem for a Map/Reduce solution. See:
http://en.wikipedia.org/wiki/MapReduce
Hadoop
for example.
It's pretty easy to mock up the data structure to see how much memory it would take.
Something like this, where you could change gen_string to generate data that approximates the messages.
import random
from commands import getstatusoutput as gso
def gen_string():
    return str(random.random())

d = {}
for z in range(10**6):
    d[gen_string()] = gen_string()

print gso('ps -eo %mem,cmd |grep test.py')[1]
On a one gig netbook:
0.4 vim test.py
0.1 /bin/bash -c time python test.py
11.7 /usr/bin/python2.6 test.py
0.1 sh -c { ps -eo %mem,cmd |grep test.py; } 2>&1
0.0 grep test.py
real 0m26.325s
user 0m25.945s
sys 0m0.377s
... So it's using about 10% of 1 gig for the 10**6 records above.
But it would also depend on how much data redundancy you have ...
Thanks to @Ignacio for intern() -
def procLog(logName, userDict):
    inf = open(logName, 'r')
    for ln in inf.readlines():
        name, act = ln.split(':')
        userDict[name] = intern(act)
    inf.close()
    return userDict

def doLogs(logNameList):
    userDict = {}
    for logName in logNameList:
        userDict = procLog(logName, userDict)
    return userDict

def writeOrderedLog(logName, userDict):
    keylist = userDict.keys()
    keylist.sort()
    outf = open(logName, 'w')
    for k in keylist:
        outf.write(k + ':' + userDict[k])
    outf.close()

def main():
    mylogs = ['log20101214', 'log20101215', 'log20101216']
    d = doLogs(mylogs)
    writeOrderedLog('cumulativeLog', d)
The question, then, is how much memory this will consume.
import random, sys

def makeUserName():
    ch = random.choice
    syl = ['ba','ma','ta','pre','re','cu','pro','do','tru','ho','cre','su','si','du','so','tri','be','hy','cy','ny','quo','po']
    # 22**5 is about 5.1 million potential names
    return ch(syl).title() + ch(syl) + ch(syl) + ch(syl) + ch(syl)

ch = random.choice
states = ['joined', 'added pic', 'added article', 'added comment', 'voted', 'logged out']

d = {}
t = []
for i in xrange(1000):
    for j in xrange(8000):
        d[makeUserName()] = ch(states)
    t.append((len(d), sys.getsizeof(d)))
which results in a plot (horizontal axis = number of user names, vertical axis = memory usage in bytes) that is... slightly weird. It looks like a dictionary preallocates quite a lot of memory, then doubles it every time it gets too full?
Anyway, 4 million users takes just under 100 MB of RAM - but it actually reallocates at around 3 million users (roughly 50 MB), so if the doubling holds, you will need about 800 MB of RAM to process 24 to 48 million users.
