I'm working on a Python script that takes Nessus data exported as CSV and removes duplicate rows. Because of the way the export works, results for different ports and protocols each get their own row, even though all the other data in the row is identical. I need to remove these duplicates, but I want to keep the Port and Protocol column data and append it to the previous row.
Here is a very small CSV I'm using to test and build the script:
Screenshot of CSV File
As you can see, all fields are exactly the same apart from the Port field (and sometimes the Protocol field differs too), so I need to read both rows of the CSV file and then join the ports like this: 80, 443, and the same with the protocols: tcp, tcp.
Then I want to save only the one line to remove the duplicate data. I have tried doing this by checking whether there has already been an instance of the Plugin ID, but my output only prints the second row's Port and Protocol.
import csv

protocollist = []
portlist = []
pluginid_list = []
multiple = False

with open(csv_file_input, 'rb') as csvfile:
    nessusreader = csv.DictReader(csvfile)
    for row in nessusreader:
        pluginid = row['Plugin ID']
        if pluginid != '':
            pluginid_list.append(row['Plugin ID'])
            print(pluginid_list)
            count = pluginid_list.count(pluginid)
            cve = row['CVE']
            if count > 0:
                protocollist.append(row['Protocol'])
                print(protocollist)
                portlist.append(row['Port'])
                print(portlist)
                print('Counted more than 1')
                multiple = True
            if multiple == True:
                stringlist = ', '.join(protocollist)
                newstring1 = stringlist
                protocol = newstring1
                stringlist2 = ', '.join(portlist)
                newstring2 = stringlist2
                port = newstring2
            else:
                protocol = row['Protocol']
                port = row['Port']
            cvss = row['CVSS']
            risk = row['Risk']
            host = row['Host']
            name = row['Name']
            synopsis = row['Synopsis']
            description = row['Description']
            solution = row['Solution']
            seealso = row['See Also']
            pluginoutput = row['Plugin Output']

with open(csv_file_output, 'w') as csvfile:
    fieldnames = ['Plugin ID', 'CVE', 'CVSS', 'Risk', 'Host', 'Protocol', 'Port', 'Name', 'Synopsis', 'Description', 'Solution', 'See Also', 'Plugin Output']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerow({'Plugin ID': pluginid, 'CVE': cve, 'CVSS': cvss, 'Risk': risk, 'Host': host, 'Protocol': protocol, 'Port': port, 'Name': name, 'Synopsis': synopsis, 'Description': description, 'Solution': solution, 'See Also': seealso, 'Plugin Output': pluginoutput})
There are probably a few errors in the code, as I've been trying different things; I just wanted to share it to give more context to the issue. This code works when the data is only as shown in the CSV, since there are only two items. But when I introduced a third set of data with a different Plugin ID, it was added to the list as well, probably because the if statement checks count > 0.
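To make the intended merge concrete, here is a minimal Python 3 sketch of the approach: group rows by Plugin ID in a dictionary and comma-join the Port and Protocol values of duplicates into the first row seen. The file names are placeholders, not my real paths.

import csv

merged = {}   # Plugin ID -> first row seen for that ID
order = []    # Plugin IDs in first-seen order

with open('nessus_input.csv', newline='') as csvfile:
    for row in csv.DictReader(csvfile):
        key = row['Plugin ID']
        if key == '':
            continue  # skip blank rows, as in the script above
        if key in merged:
            # Duplicate finding: fold this row's Port/Protocol into the kept row
            merged[key]['Port'] += ', ' + row['Port']
            merged[key]['Protocol'] += ', ' + row['Protocol']
        else:
            merged[key] = row
            order.append(key)

with open('nessus_output.csv', 'w', newline='') as csvfile:
    fieldnames = ['Plugin ID', 'CVE', 'CVSS', 'Risk', 'Host', 'Protocol', 'Port',
                  'Name', 'Synopsis', 'Description', 'Solution', 'See Also', 'Plugin Output']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames, extrasaction='ignore')
    writer.writeheader()
    for key in order:
        writer.writerow(merged[key])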
Working on a project creating a Python Flask website that stores user logins in a text file. I have a text file where each line is one user, and each user has seven parameters stored on the line. All user parameters are separated by a ; character.
Parameters are:
username
password
first name
last name
background color
title
avatar
Sample of the text file:
joebob;pass1;joe;bob;yellow;My title!!;https://upload.wikimedia.org/wikipedia/commons/c/cd/Stick_Figure.jpg
richlong;pass2;rich;long;blue;My title2!!;https://www.iconspng.com/images/stick-figure-walking/stick-figure-walking.jpg
How do I go about storing the parameters in a Python array, and how do I access them later when I need to reference log-ins?
Here is what I wrote so far:
accounts = { }

def readAccounts():
    file = open("assignment11-account-info.txt", "r")
    for accounts in file:  # line
        tmp = accounts.split(';')
        for data in tmp:  # data in line
            accounts[data[0]] = {
                'user': data[0],
                'pass': data[1],
                'first': data[2],
                'last': data[3],
                'color': data[4],
                'title': data[5],
                'avatar': data[6].rstrip()
            }
    file.close()
You can use the Python built-in csv module to parse the file:
import csv

with open("assignment11-account-info.txt", "r") as file:
    reader = csv.reader(file, delimiter=';')
    fields = ('user', 'passwd', 'first', 'last', 'color', 'title', 'avatar')
    result = []
    for row in reader:
        res = dict(zip(fields, row))
        result.append(res)
Or, equivalent but harder to read for a beginner, the Pythonic list comprehension:
with open("assignment11-account-info.txt", "r") as file:
reader = csv.reader(file, delimiter=';')
fields = ('user', 'passwd', 'first', 'last', 'color','title','avatar')
result = [ dict(zip(fields, row)) for row in reader ]
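Either way, result is a list of dicts. For the log-in checks mentioned in the question, one option is to index them by user name; a small sketch (the check_login helper is illustrative, not part of the question's code):

# Index the parsed rows by user name for quick log-in lookups
accounts = {acc['user']: acc for acc in result}

def check_login(user, password):
    acc = accounts.get(user)
    return acc is not None and acc['passwd'] == password

print(check_login('joebob', 'pass1'))   # True
print(check_login('joebob', 'wrong'))   # False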
Here's what I might do:
accounts = {}

with open("assignment11-account-info.txt", "r") as file:
    for line in file:
        fields = line.rstrip().split(";")
        user = fields[0]
        passwd = fields[1]  # "pass" is a reserved keyword in Python, so use another name
        first = fields[2]
        last = fields[3]
        color = fields[4]
        title = fields[5]
        avatar = fields[6]
        accounts[user] = {
            "user": user,
            "pass": passwd,
            "first": first,
            "last": last,
            "color": color,
            "title": title,
            "avatar": avatar
        }
By using with, the file handle file is closed for you automatically. This is the most Pythonic way of doing things.
So long as user is unique, you won't overwrite any entries you put in as you read through the file assignment11-account-info.txt.
If you need to deal with a case where user is repeated in the file assignment11-account-info.txt, then you need to store a list ([...]) of entries per user rather than a single dictionary ({...}) entry. Otherwise, reusing the value of user will overwrite any previous user entry you added to accounts. Overwriting existing entries is almost always a bad thing when using dictionaries!
If that is the case, I might do the following:
accounts = {}

with open("assignment11-account-info.txt", "r") as file:
    for line in file:
        fields = line.rstrip().split(";")
        user = fields[0]
        passwd = fields[1]  # again, avoiding the reserved keyword "pass"
        first = fields[2]
        last = fields[3]
        color = fields[4]
        title = fields[5]
        avatar = fields[6]
        if user not in accounts:
            accounts[user] = []
        accounts[user].append({
            "user": user,
            "pass": passwd,
            "first": first,
            "last": last,
            "color": color,
            "title": title,
            "avatar": avatar
        })
In this way, you preserve any cases where user is duplicated.
I am stuck setting up Python with the dedupe library from dedupe.io to deduplicate a set of entries in a Postgres database. The error is "Records do not line up with data model", which should be easy to solve, but I just do not get why I am seeing this message.
What I have now (focused code, other functions removed):
# ## Setup
settings_file = 'lead_dedupe_settings'
training_file = 'lead_dedupe_training.json'
start_time = time.time()
...

def training():
    # We'll be using variations on this following select statement to pull
    # in campaign donor info.
    #
    # We did a fair amount of preprocessing of the fields in

    """ Define Lead Query """
    sql = "select id, phone, mobilephone, postalcode, email from dev_manuel.somedata"

    # ## Training
    if os.path.exists(settings_file):
        print('reading from ', settings_file)
        with open(settings_file, 'rb') as sf:
            deduper = dedupe.StaticDedupe(sf, num_cores=4)
    else:
        # Define the fields dedupe will pay attention to
        #
        # The address, city, and zip fields are often missing, so we'll
        # tell dedupe that, and we'll learn a model that takes that into
        # account
        fields = [
            {'field': 'id', 'type': 'ShortString'},
            {'field': 'phone', 'type': 'String', 'has missing': True},
            {'field': 'mobilephone', 'type': 'String', 'has missing': True},
            {'field': 'postalcode', 'type': 'ShortString', 'has missing': True},
            {'field': 'email', 'type': 'String', 'has missing': True}
        ]
        # Create a new deduper object and pass our data model to it.
        deduper = dedupe.Dedupe(fields, num_cores=4)

        # connect to db and execute
        conn = None
        try:
            # read the connection parameters
            params = config()
            # connect to the PostgreSQL server
            conn = psycopg2.connect(**params)
            print('Connecting to the PostgreSQL database...')
            cur = conn.cursor()
            # execute sql
            cur.execute(sql)
            temp_d = dict((i, row) for i, row in enumerate(cur))
            print(temp_d)
            deduper.sample(temp_d, 10000)
            print('Done stage 1')
            del temp_d
            # close communication with the PostgreSQL database server
            cur.close()
        except (Exception, psycopg2.DatabaseError) as error:
            print(error)
        finally:
            if conn is not None:
                conn.close()
                print('Closed Connection')

        # If we have training data saved from a previous run of dedupe,
        # look for it and load it in.
        #
        # __Note:__ if you want to train from
        # scratch, delete the training_file
        if os.path.exists(training_file):
            print('reading labeled examples from ', training_file)
            with open(training_file) as tf:
                deduper.readTraining(tf)

        # ## Active learning
        print('starting active labeling...')
        # Starts the training loop. Dedupe will find the next pair of records
        # it is least certain about and ask you to label them as duplicates
        # or not.

        # debug
        print(deduper)
        # vars(deduper)

        # use 'y', 'n' and 'u' keys to flag duplicates
        # press 'f' when you are finished
        dedupe.convenience.consoleLabel(deduper)

        # When finished, save our labeled training pairs to disk
        with open(training_file, 'w') as tf:
            deduper.writeTraining(tf)

        # Notice our argument here
        #
        # `recall` is the proportion of true dupe pairs that the learned
        # rules must cover. You may want to reduce this if you are making
        # too many blocks and too many comparisons.
        deduper.train(recall=0.90)

        with open(settings_file, 'wb') as sf:
            deduper.writeSettings(sf)

        # We can now remove some of the memory-hogging objects we used
        # for training
        deduper.cleanupTraining()
The error message is "Records do not line up with data model. The field 'id' is in data_model but not in a record". As you can see, I am defining five fields for dedupe to learn. The query I am using returns exactly these five columns, with the data in them.
The output of
print(temp_d)
is
{0: ('00Q1o00000OjmQmEAJ', '+4955555555', None, '01561', None), 1: ('00Q1o00000JhgSUEAZ', None, '+4915555555', '27729', 'email#aemail.de')}
Which looks to me like valid input for the dedupe library.
What I tried
I checked whether a training file from a previous run had already been written and was somehow being read and used; this is not the case (the code would even say so).
I tried debugging the "deduper" object, where the definition of the fields and such goes in; I can see the fields definition.
I looked at other examples, like the csv or mysql ones, which do pretty much the same thing I do.
Please point me in the direction of where I am wrong.
It looks like the issue may be that your temp_d is a dictionary of tuples, as opposed to the expected input of a dictionary of dictionaries. I just started working with this package and found an example here that works for my purposes; it provides this function for setting up the dictionary, albeit from a CSV instead of the database pull you have in yours.
def readData(filename):
    # preProcess() is the value-normalizing helper from the linked example
    data_d = {}
    with open(filename) as f:
        reader = csv.DictReader(f)
        for row in reader:
            clean_row = [(k, preProcess(v)) for (k, v) in row.items()]
            row_id = int(row['Id'])
            data_d[row_id] = dict(clean_row)
    return data_d
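Applied to your Postgres pull, the same fix might look like the sketch below: it reuses your cur and sql and builds each record as a dict keyed on the cursor's column names, which is the shape dedupe expects.

# Build {record_id: {field_name: value}} instead of {record_id: tuple}
cur.execute(sql)
colnames = [desc[0] for desc in cur.description]  # e.g. ['id', 'phone', ...]
temp_d = dict((i, dict(zip(colnames, row))) for i, row in enumerate(cur))

# temp_d now looks like:
# {0: {'id': '00Q1o00000OjmQmEAJ', 'phone': '+4955555555', 'mobilephone': None,
#      'postalcode': '01561', 'email': None}, ...}
deduper.sample(temp_d, 10000)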
I have created a function that fetches price, rating, etc. after it hits an API:
def is_priced(business_id):
    try:
        priced_ind = get_business(API_KEY, business_id)
        priced_ind1 = priced_ind['price']
    except:
        priced_ind1 = 'None'
    return priced_ind1

priced_ind = is_priced(b_id)
print(priced_ind)
Similarly, for rating:
def is_rated(business_id):
    try:
        rated_ind = get_business(API_KEY, business_id)
        rated_ind1 = rated_ind['rating']
    except:
        rated_ind1 = 'None'
    return rated_ind1
However, I want my function to loop through the business names I have in my CSV file, fetch all this data, and export it to a new CSV file with these two parameters beside the name of each business.
The CSV file has info on the name of the business along with its address, city, state, zip, and country.
Eg:
Name address city state zip country
XYZ(The) 5* WE 223899th St. New York NY 19921 US
My output:
Querying https://api.xyz.com/v3/businesses/matches ...
True
Querying https://api.xyz.com/v3/businesses/matches ...
4.0
Querying https://api.xyz.com/v3/businesses/matches ...
$$
Querying https://api.xyz.com/v3/businesses/matches ...
Querying https://api.xyz.com/v3/businesses/matches ...
The real issue is that my output only returns the business ID in the CSV; the rating etc., as you can see, is just printed to the console. How do I set up a loop so that the info I want for every business ends up in a single CSV?
The csv module is useful for this sort of thing, e.g.:
import csv

with open('f.csv', 'r') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='"')
    with open('tmp.csv', 'w') as output:
        writer = csv.writer(output)
        for row in reader:
            business_id = row[0]
            row.append(get_price_index(business_id))
            row.append(get_rate_index(business_id))
            writer.writerow(row)
You can read the business names from the CSV file, iterate over them using a for loop, hit the API and store the results, and write to a new CSV file.
import csv

data = []
with open('businesses.csv') as fp:
    reader = csv.reader(fp)
    # skip header line
    header = next(reader)
    for row in reader:
        b_name = row[0]
        # not sure how you get the business ID:
        b_id = get_business_id(b_name)
        p = is_priced(b_id)
        r = is_rated(b_id)
        data.append((b_name, p, r))

# write out the results
with open('business_data.csv', 'w') as fp:
    writer = csv.writer(fp)
    writer.writerow(['name', 'price', 'rating'])
    for row in data:
        writer.writerow(row)
You can do this easily using pandas:
import pandas as pd

df = pd.read_csv('your_csv.csv', usecols=['business_name'])  # since you only need the name
# your functions will receive a row containing business_name;
# assign the results to new columns instead of overwriting the frame
df['price'] = df.apply(is_priced, axis=1)
df['rating'] = df.apply(is_rated, axis=1)
df.to_csv('result.csv', index=False)
All you have to do in your functions is:
def is_priced(row):
    business_name = row['business_name']
    business_id = ??
    ...
So I am collecting data, and this data is saved into CSV files. However, for presentation purposes I want to reorder the columns in each respective CSV file based on its related "order".
I was using this question (write CSV columns out in a different order in Python) as a guide, but I'm not sure why I'm getting the error
writeindices = [name2index[name] for name in writenames]
KeyError: % Processor Time
when I run it. Note this error doesn't seem to be limited to just the string '% Processor Time'.
Where am I going wrong?
Here is my code:
import csv
import operator

CPU_order = ["% Processor Time", "% User Time", "Other"]
Memory_order = ["Available Bytes", "Pages/sec", "Pages Output/sec", "Pages Input/sec", "Page Faults/sec"]

def reorder_csv(path, title, input_file):
    if title == 'CPU':
        order = CPU_order
    elif title == 'Memory':
        order = Memory_order
    output_file = path + '/' + title + '_reorder' + '.csv'
    writenames = order
    reader = csv.reader(input_file)
    writer = csv.writer(open(output_file, 'wb'))
    readnames = reader.next()
    name2index = dict((name, index) for index, name in enumerate(readnames))
    writeindices = [name2index[name] for name in writenames]
    reorderfunc = operator.itemgetter(*writeindices)
    writer.writerow(writenames)
    for row in reader:
        writer.writerow(reorderfunc(row))
Here is a sample of what the input CSV file looks like:
,CPU\% User Time,CPU\% Processor Time,CPU\Other
05/23/2016 06:01:51.552,0,0,0
05/23/2016 06:02:01.567,0.038940741537158409,0.62259056657940626,0.077882481554869071
05/23/2016 06:02:11.566,0.03900149141703179,0.77956981074955856,0
05/23/2016 06:02:21.566,0,0,0
05/23/2016 06:02:31.566,0,1.1695867249963632,0
Your code works. It is your data which does not have a column named "% Processor Time". Here is the sample data I used:
Other,% User Time,% Processor Time
o1,u1,p1
o2,u2,p2
And here is the call I make:
reorder_csv('.', 'CPU', open('data.csv'))
With these settings, everything works fine. Please check your data.
Update
Now that I see your data, it looks like you have column names such as "CPU\% Processor Time" and want to translate them to "% Processor Time" before writing out. All you need to do is create your name2index this way:
name2index = dict((name.replace('CPU\\', ''), index) for index, name in enumerate(readnames))
The difference here is that instead of name, you have name.replace('CPU\\', ''), which gets rid of the CPU\ part.
Update 2
I reworked your code to use csv.DictReader and csv.DictWriter. I also assume that "CPU\% Privileged Time" will be transformed into "Other". If that is not the case, you can fix it in the transformer dictionary.
import csv
import os

def rename_columns(row):
    """ Take a row (dictionary) of data and return a new row with columns renamed """
    transformer = {
        'CPU\\% User Time': '% User Time',
        'CPU\\% Processor Time': '% Processor Time',
        'CPU\\% Privileged Time': 'Other',
    }
    new_row = {transformer.get(k, k): v for k, v in row.items()}
    return new_row

def reorder_csv(path, title, input_file):
    header = dict(
        CPU=["% Processor Time", "% User Time", "Other"],
        Memory=["Available Bytes", "Pages/sec", "Pages Output/sec", "Pages Input/sec", "Page Faults/sec"],
    )
    reader = csv.DictReader(input_file)
    output_filename = os.path.join(path, '{}_reorder2.csv'.format(title))
    with open(output_filename, 'wb') as outfile:
        # Create a new writer where each row is a dictionary.
        # If the row contains extra keys, ignore them
        writer = csv.DictWriter(outfile, header[title], extrasaction='ignore')
        writer.writeheader()
        for row in reader:
            # Each row is a dictionary, not a list
            print row
            row = rename_columns(row)
            print row
            print
            writer.writerow(row)
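Usage is the same as before; assuming the data.csv sample file from above:

reorder_csv('.', 'CPU', open('data.csv'))
# writes ./CPU_reorder2.csv with the columns: % Processor Time, % User Time, Other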
I want to create a CSV file that has multiple users and, at the same time, create email addresses for these users using their last names. I am using Python for this, but I can't get it to create the e-mail addresses in the list. My script is below; what am I missing?
import csv
First_Name = ["Test"]
Last_Name = ["User%d" % i for i in range (1,10)]
Email_Address = 'Last_Name' [("#myemail.com")]
Password = ["Password1"]
# open a file for writing.
csv_out = open('mycsv.csv', 'wb')
# create the csv writer object.
mywriter = csv.writer(csv_out)
# all rows at once.
rows =zip(Email_Address, Password, First_Name, Last_Name,)
mywriter.writerows(rows)
csv_out.close()
Make
Email_Address = 'Last_Name' [("#myemail.com")]
into
Email_Address = [x + "#myemail.com" for x in Last_Name]
to create a list of all email addresses based on all last names. This assumes you intended for all of your variables to be lists.
Even though this will create nine emails (one for each last name), your file will only have one row written to it. This is because zip stops iterating at the length of the shortest list you pass it. Currently First_Name and Password each contain only one item.
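To see the truncation in action, and one way around it (repeating the one-item lists to match is an assumption about what was intended):

First_Name = ["Test"]
Last_Name = ["User%d" % i for i in range(1, 10)]
Password = ["Password1"]
Email_Address = [x + "#myemail.com" for x in Last_Name]

# zip stops at the shortest input, so only one row survives
print(len(list(zip(Email_Address, Password, First_Name, Last_Name))))  # 1

# Repeating the one-item lists to match gives one row per user
n = len(Last_Name)
rows = zip(Email_Address, Password * n, First_Name * n, Last_Name)
print(len(list(rows)))  # 9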
I'm basically guessing, since you haven't said anything about what errors you're getting, but the most obvious problem I can see is that you're trying to index the string 'Last_Name' with another string, which doesn't make a lot of sense.
'Last_Name' [("#myemail.com")]
should be:
'Last_Name' + "#myemail.com"
Now, as far as what you're actually trying to do, which is extremely unclear, I think you want to use a series of list comprehensions. For example:
import csv

users = [i for i in range(0, 10)]
first_names = ["test" + str(user) for user in users]
last_names = ["User%d" % user for user in users]
email_addresses = [last_name + "#myemail.com" for last_name in last_names]
passwords = ["Password1" for user in users]

with open('mycsv.csv', 'wb') as csv_out:
    writer = csv.writer(csv_out)
    writer.writerows(zip(email_addresses, passwords, first_names, last_names))
output:
User0#myemail.com,Password1,test0,User0
User1#myemail.com,Password1,test1,User1
User2#myemail.com,Password1,test2,User2
User3#myemail.com,Password1,test3,User3
User4#myemail.com,Password1,test4,User4
User5#myemail.com,Password1,test5,User5
User6#myemail.com,Password1,test6,User6
User7#myemail.com,Password1,test7,User7
User8#myemail.com,Password1,test8,User8
User9#myemail.com,Password1,test9,User9
Your zip() will only produce a list with one item because First_Name and Password each explicitly contain only one item.
How about this, avoiding the zip entirely:
import csv

with open('mycsv.csv', 'wb') as csv_out:
    writer = csv.writer(csv_out)
    for i in xrange(1, 9):
        writer.writerow(["User%d#myemail.com" % i, "Password%d" % i, "test%d" % i, "User%d" % i])