I have a Python 3.7.3 script that reads an XML file, parses what I need, and is supposed to export the results to CSV. I had to go deeper into the XML tree using a for-in loop for one of the fields, which throws off how the other for-in statements append to the CSV.
When running the code below, my output file does not list the different V-IDs (refer to the third for child in root... statement), although all the other fields are correct. The V-IDs display correctly when I remove the last for-in statement and move the firstFile.write statement two tabs to the left, but then I don't have the status, so I know the problem is in the last statement. BTW, if I move the firstFile.write statement all the way to the left, it only returns one row in the CSV, but there should be 5.
Is there a way to create a list from the output and then combine them all? Or could I move the firstFile.write statement two tabs to the left and append the last for-in statement to a specific column (essentially breaking up the firstFile.write statement)? Or do you have any other suggestions?
import os
import sys
import glob
import xml.etree.ElementTree as ET

firstFile = open("myfile.csv", "a")
firstFile.write("V-ID,")
firstFile.write("HostName,")
firstFile.write("Status,")
firstFile.write("Comments,")
firstFile.write("Finding Details,")
firstFile.write("STIG Name,")

basePath = os.path.dirname(os.path.realpath(__file__))
xmlFile = os.path.join(basePath, "C:\\Users\\myUserName\\Desktop\\Scripts\\Python\\XMLtest.xml")
tree = ET.parse(xmlFile)
root = tree.getroot()

for child in root.findall('{http://checklists.nist.gov/xccdf/1.2}title'):
    d = child.text

for child in root:
    for children in child.findall('{http://checklists.nist.gov/xccdf/1.2}target'):
        b = children.text

for child in root.findall('{http://checklists.nist.gov/xccdf/1.2}Group'):
    x = (str(child.attrib))
    x = (x.split('_')[6])
    a = x[:-2]

for child in root:
    for children in child:
        for childrens in children.findall('{http://checklists.nist.gov/xccdf/1.2}result'):
            x = childrens.text
            if ('pass' in x):
                c = 'Completed'
            else:
                c = 'Ongoing'
            firstFile.write("\n" + a + ',' + b + ',' + c + ',' + ',' + ',' + d)

firstFile.close()
Finally figured this out; it took about a week. I wrote the output to CSV, read it back into a list for each column, filtered out the blank entries, and wrote it out again. Below is how I did it.
import os
import sys
import glob
import csv
import xml.etree.ElementTree as ET

firstFile = open("myfile.csv", "a")
path = 'C:\\Users\\JT\\Desktop\\Scripts\\Python\\xccdf\\'

for fileName in glob.glob(os.path.join(path, '*.xml')):
    with open('C:\\Users\\JT\\Desktop\\Scripts\\Python\\myfile1.csv', 'w', newline='') as csvFile1:
        csvWriter = csv.writer(csvFile1, delimiter=',')
        # do your stuff
        tree = ET.parse(fileName)
        root = tree.getroot()
        # Stig Title
        for child in root.findall('{http://checklists.nist.gov/xccdf/1.2}title'):
            d = child.text
        # hostName
        for child in root:
            for children in child.findall('{http://checklists.nist.gov/xccdf/1.2}target'):
                b = children.text
        # V-ID
        for child in root.findall('{http://checklists.nist.gov/xccdf/1.2}Group'):
            x = (str(child.attrib))
            x = (x.split('_')[6])
            a = x[:-2]
            firstFile.write(a + '\n')
        # Status
        for child in root:
            for children in child:
                for childrens in children.findall('{http://checklists.nist.gov/xccdf/1.2}result'):
                    x = childrens.text
                    firstFile.write(',' + b + ',' + x + ',' + ',' + ',' + d + '\n')

with open('C:\\Users\\JT\\Desktop\\Scripts\\Python\\myfile.csv', 'r') as csvFile:
    csvReader = csv.reader(csvFile, delimiter=',')
    vIDs = []
    hostNames = []
    status = []
    stigTitles = []
    for line in csvReader:
        vID = line[0]
        vIDs.append(vID)
        try:
            hostName = line[1]
            hostNames.append(hostName)
        except IndexError:
            pass
        try:
            state = line[2]
            status.append(state)
        except IndexError:
            pass
        try:
            stigTitle = line[5]
            stigTitles.append(stigTitle)
        except IndexError:
            pass

with open('C:\\Users\\JT\\Desktop\\Scripts\\Python\\myfile1.csv', 'a', newline='') as csvFile1:
    csvWriter = csv.writer(csvFile1, delimiter=',')
    vIDMod = list(filter(None, vIDs))
    hostNameMod = list(filter(None, hostNames))
    statusMod = list(filter(None, status))
    stigTitlesMod = list(filter(None, stigTitles))
    csvWriter.writerows(zip(vIDMod, hostNameMod, statusMod, stigTitlesMod))

firstFile.close()
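For later reference, the round trip through the intermediate file isn't strictly necessary: the columns can be collected into in-memory lists and paired with zip directly. Below is a minimal sketch of that idea (untested against real xccdf files; it assumes each Group has exactly one matching result, in document order, and reuses the V-ID string parsing from above):

import csv
import glob
import os
import xml.etree.ElementTree as ET

NS = '{http://checklists.nist.gov/xccdf/1.2}'
path = 'C:\\Users\\JT\\Desktop\\Scripts\\Python\\xccdf\\'

with open('myfile1.csv', 'w', newline='') as out:
    csvWriter = csv.writer(out)
    csvWriter.writerow(['V-ID', 'HostName', 'Status', 'STIG Name'])
    for fileName in glob.glob(os.path.join(path, '*.xml')):
        root = ET.parse(fileName).getroot()
        title = root.findtext(NS + 'title', default='')
        host = next((t.text for t in root.iter(NS + 'target')), '')
        # collect the per-Group fields into parallel lists
        vIDs = [str(g.attrib).split('_')[6][:-2] for g in root.findall(NS + 'Group')]
        results = [r.text for r in root.iter(NS + 'result')]
        # then pair them row by row instead of writing from separate loops
        for vID, result in zip(vIDs, results):
            csvWriter.writerow([vID, host, result, title])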
Related
I am new to Python programming and am getting this error:
ShotCode = root.attrib['Stops']
KeyError: 'Stops'
from the code below:
tree = ET.parse(os.path.join(folderpath, xmlfilename))
root = tree.getroot()
filename, _ = xmlfilename.rsplit('.', 1)
Shot_30AA = open(filename + '.csv', 'w', newline='')
csvwriter = csv.writer(Shot_30AA)
head = []
ShotCode = root.attrib['Stops']
csvwriter.writerow(['Stops', ShotCode])
head.append(ShotCode)
Sample XML:
<Stops>
    <cmp_name>N/A</cmp_name>
    <cmp_id>N/A</cmp_id>
    <pu_DepartureDate>N/A</pu_DepartureDate>
    <DeliveryName>ABC</DeliveryName>
    <DeliveryID>RRFF</DeliveryID>
    <del_DepartureDate>2021-07-26T16:01:24.647</del_DepartureDate>
    <WorkOrder/>
    <ReleaseNo>EFFC</ReleaseNo>
    <NetWeight>38160.00</NetWeight>
    <TareWeight>0</TareWeight>
    <baletype>OCC-BALE</baletype>
    <BaleCount>36.00</BaleCount>
    <BaleCountLBH>0</BaleCountLBH>
    <SupplierName>VFGP</SupplierName>
    <DriverCode>DERG</DriverCode>
    <Coments>18971852</Coments>
    <TruckId>18971852</TruckId>
    <Picture/>
</Stops>
I think you are confusing attributes and tags.
Example of an XML attribute:
<Stops test="attrib1"> </Stops>
In your case the root element has no attributes:
print(root.attrib)  # {}
print(root.tag)     # Stops
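The data you are after lives in the child elements of <Stops>, not in attributes. A minimal sketch of reading each child's tag and text and writing them out as CSV rows (file names here are placeholders):

import csv
import xml.etree.ElementTree as ET

tree = ET.parse('stops.xml')  # placeholder path to the sample document
root = tree.getroot()         # root.tag is 'Stops', root.attrib is {}

with open('stops.csv', 'w', newline='') as outfile:
    csvwriter = csv.writer(outfile)
    for child in root:  # <cmp_name>, <cmp_id>, <DeliveryName>, ...
        csvwriter.writerow([child.tag, child.text])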
You may think of this as yet another redundant question, but I have tried to go through all the similar questions asked, with no luck so far. In my specific use case, I can't use pandas or any similar library for this operation.
This is what my input looks like
AttributeName,Value
Name,John
Gender,M
PlaceofBirth,Texas
Name,Alexa
Gender,F
SurName,Garden
This is my expected output
Name,Gender,Surname,PlaceofBirth
John,M,,Texas
Alexa,F,Garden,
So far, I have tried to store my input in a dictionary and then write it out as a CSV string, but it fails because I am not sure how to handle the missing column values. Here is my code so far:
reader = csv.reader(csvstring.split('\n'), delimiter=',')
csvdata = {}
csvfile = ''
for row in reader:
    if row[0] != '' and row[0] in csvdata and row[1] != '':
        csvdata[row[0]].append(row[1])
    elif row[0] != '' and row[0] in csvdata and row[1] == '':
        csvdata[row[0]].append(' ')
    elif row[0] != '' and row[1] != '':
        csvdata[row[0]] = [row[1]]
    elif row[0] != '' and row[1] == '':
        csvdata[row[0]] = [' ']
for key, value in csvdata.items():
    if value == ' ':
        csvdata[key] = []
csvfile += ','.join(csvdata.keys()) + '\n'
for row in zip(*csvdata.values()):
    csvfile += ','.join(row) + '\n'
For the above code as well, I took some help from here. Thanks in advance for any suggestions/advice.
Edit #1: Updated the code to make clear that I am processing a CSV string, not a CSV file.
What you need is something like this:

import csv

with open("in.csv") as infile:
    buffer = []
    item = {}
    lines = csv.reader(infile)
    for line in lines:
        if line[0] == 'Name':
            buffer.append(item.copy())
            item = {'Name': line[1]}
        else:
            item[line[0]] = line[1]
    buffer.append(item.copy())

for item in buffer[1:]:
    print(item)
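To get from that list of dicts to the CSV in the question, csv.DictWriter with restval='' fills in the missing columns automatically. A short sketch, assuming the column order is fixed up front (note the keys keep the input's spellings, e.g. SurName):

import csv

cols = ["Name", "Gender", "SurName", "PlaceofBirth"]
with open("out.csv", "w", newline="") as outfile:
    writer = csv.DictWriter(outfile, fieldnames=cols, restval="")
    writer.writeheader()
    writer.writerows(buffer[1:])  # skip the empty dict buffered before the first record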
If none of the attributes is mandatory, I think @framontb's solution needs to be rearranged so that it also works when the Name field is not given.
This is an import-free solution, and it's not super elegant.
I assume you already have the lines in this form, with these columns:
lines = [
    "Name,John",
    "Gender,M",
    "PlaceofBirth,Texas",
    "Gender,F",
    "Name,Alexa",
    "Surname,Garden"  # modified typo here: SurName -> Surname
]

cols = ["Name", "Gender", "Surname", "PlaceofBirth"]
We need to distinguish one record from another, and without mandatory fields the best I can do is start a new record whenever an attribute has already been seen.
To do this, I use a temporary list of attributes, tempcols, from which I remove elements until an error is raised, i.e. a new record begins.
Code:

csvdata = {k: [] for k in cols}
tempcols = list(cols)
for line in lines:
    attr, value = line.split(",")
    try:
        csvdata[attr].append(value)
        tempcols.remove(attr)
    except ValueError:
        for c in tempcols:  # now tempcols has only "missing" attributes
            csvdata[c].append("")
        tempcols = [c for c in cols if c != attr]
for c in tempcols:
    csvdata[c].append("")

# write csv string with the code you provided
csvfile = ""
csvfile += ",".join(csvdata.keys()) + "\n"
for row in zip(*csvdata.values()):
    csvfile += ",".join(row) + "\n"
>>> print(csvfile)
Name,PlaceofBirth,Surname,Gender
John,Texas,,M
Alexa,,Garden,F
If instead you want the columns ordered as in your desired output:

csvfile = ""
csvfile += ",".join(cols) + "\n"
for row in zip(*[csvdata[k] for k in cols]):
    csvfile += ",".join(row) + "\n"
>>> print(csvfile)
Name,Gender,Surname,PlaceofBirth
John,M,,Texas
Alexa,F,Garden,
This works for me:

import csv

with open("in.csv") as infile, open("out.csv", "w", newline="") as outfile:
    incsv, outcsv = csv.reader(infile), csv.writer(outfile)
    next(incsv)  # skip the header row
    outcsv.writerows(zip(*incsv))
Update: for input and output as strings:

import csv, io

with io.StringIO(indata) as infile, io.StringIO() as outfile:
    incsv, outcsv = csv.reader(infile), csv.writer(outfile)
    next(incsv)  # skip the header row
    outcsv.writerows(zip(*incsv))
    print(outfile.getvalue())
I get files that have NTFS audit permissions and I'm using Python to parse them. The raw CSV files list the path and then which groups have which access, such as this type of pattern:
E:\DIR A, CREATOR OWNER FullControl
E:\DIR A, Sales FullControl
E:\DIR A, HR Full Control
E:\DIR A\SUBDIR, Sales FullControl
E:\DIR A\SUBDIR, HR FullControl
My code parses the file to output this:
File Access for: E:\DIR A
CREATOR OWNER,FullControl
Sales,FullControl
HR,FullControl
File Access For: E:\DIR A\SUBDIR
Sales,FullControl
HR,FullControl
I'm new to generators, but I'd like to use them to optimize my code. Nothing I've tried seems to work, so here is the original code (I know it's ugly). It works, but it's very slow. The only way I could do this was to parse out the paths first, put them in a list, make a set so that they're unique, then iterate over that list, match each path against the paths in the second list, and list all the items found for it. Like I said, it's ugly, but it works.
import os, codecs, sys
reload(sys)
sys.setdefaultencoding('utf8')  # to prevent cp-932 errors on screen

file = "aud.csv"
outfile = "access-2.csv"
filelist = []
accesslist = []

with codecs.open(file, "r", 'utf-8-sig') as infile:
    for line in infile:
        newline = line.split(',')
        folder = newline[0].replace("\"", "")
        user = newline[1].replace("\"", "")
        filelist.append(folder)
        accesslist.append(folder + "," + user)

newfl = sorted(set(filelist))

def makeFile():
    print "Starting, please wait"
    for i in range(1, len(newfl)):
        searchItem = str(newfl[i])
        with codecs.open(outfile, "a", 'utf-8-sig') as output:
            outtext = ("\r\nFile access for: " + searchItem + "\r\n")
            output.write(outtext)
            for item in accesslist:
                searchBreak = item.split(",")
                searchTarg = searchBreak[0]
                if searchItem == searchTarg:
                    searchBreaknew = searchBreak[1].replace("FSA-INC01S\\", "")
                    searchBreaknew = str(searchBreaknew)
                    # print(searchBreaknew)
                    searchBreaknew = searchBreaknew.replace(" ", ",")
                    searchBreaknew = searchBreaknew.replace("CREATOR,OWNER", "CREATOR OWNER")
                    output.write(searchBreaknew)
How should I optimize this?
EDIT:
Here is an edited version. It works MUCH faster, though I'm sure it can still be improved:
import os, codecs, sys, csv
reload(sys)
sys.setdefaultencoding('utf8')

file = "aud.csv"
outfile = "access-3.csv"
filelist = []
accesslist = []

with codecs.open(file, "r", 'utf-8-sig') as csvinfile:
    auditfile = csv.reader(csvinfile, delimiter=",")
    for line in auditfile:
        folder = line[0]
        user = line[1].replace("FSA-INC01S\\", "")
        filelist.append(folder)
        accesslist.append(folder + "," + user)

newfl = sorted(set(filelist))

def makeFile():
    print "Starting, please wait"
    for i in xrange(1, len(newfl)):
        searchItem = str(newfl[i])
        outtext = ("\r\nFile access for: " + searchItem + "\r\n")
        accessUserlist = ""
        for item in accesslist:
            searchBreak = item.split(",")
            if searchItem == searchBreak[0]:
                searchBreaknew = str(searchBreak[1]).replace(" ", ",")
                searchBreaknew = searchBreaknew.replace("R,O", "R O")
                accessUserlist += searchBreaknew + "\r\n"
        with codecs.open(outfile, "a", 'utf-8-sig') as output:
            output.write(outtext)
            output.write(accessUserlist)
I was misled by your use of the .csv file extension.
Your expected output isn't valid CSV, as a record cannot contain a bare newline.
A proposal using a generator that yields the output record by record:
class Audit(object):
    def __init__(self, fieldnames):
        self.fieldnames = fieldnames
        self.__access = {}

    def append(self, row):
        folder = row[self.fieldnames[0]]
        access = row[self.fieldnames[1]].strip(' ')
        access = access.replace("FSA-INC01S\\", "")
        access = access.split(' ')
        if len(access) == 3:
            if access[0] == 'CREATOR':
                access[0] += ' ' + access[1]
                del access[1]
            elif access[1] == 'Full':
                access[1] += ' ' + access[2]
                del access[2]
        if folder not in self.__access:
            self.__access[folder] = []
        self.__access[folder].append(access)

    # Generator for class Audit
    def __iter__(self):
        for folder in sorted(self.__access):
            record = folder + '\n'
            for access in self.__access[folder]:
                record += '%s\n' % (','.join(access))
            yield record + '\n'
How to use it:
def main():
    import io, csv
    audit = Audit(['Folder', 'Accesslist'])
    with io.open(file, "r", encoding='utf-8') as csc_in:
        for row in csv.DictReader(csc_in, delimiter=","):
            audit.append(row)
    with io.open(outfile, 'w', newline='', encoding='utf-8') as txt_out:
        for record in audit:
            txt_out.write(record)
Tested with Python:3.4.2 - csv:1.0
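For comparison, the same grouping can be sketched without a class using collections.defaultdict. This is only a minimal illustration (file names assumed; it skips the CREATOR OWNER / Full Control re-joining done above and assumes exactly two columns per row):

import csv
import io
from collections import defaultdict

access = defaultdict(list)
with io.open("aud.csv", "r", encoding="utf-8-sig") as infile:
    for folder, user in csv.reader(infile):  # assumes exactly two columns
        access[folder.strip()].append(user.strip())

with io.open("access.txt", "w", encoding="utf-8") as outfile:
    for folder in sorted(access):  # one block per folder, folders sorted
        outfile.write("File access for: %s\n" % folder)
        for user in access[folder]:
            outfile.write(user + "\n")
        outfile.write("\n")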
I am counting the number of contractions in a certain set of presidential speeches, and want to output these contractions to a CSV or text file. Here's my code:
import urllib2, sys, os, csv
from bs4 import BeautifulSoup, NavigableString
from string import punctuation as p
from multiprocessing import Pool
import re, nltk
import requests
import math, functools
import summarize
reload(sys)

def processURL_short(l):
    open_url = urllib2.urlopen(l).read()
    item_soup = BeautifulSoup(open_url)
    item_div = item_soup.find('div', {'id': 'transcript'}, {'class': 'displaytext'})
    item_str = item_div.text.lower()
    return item_str

every_link_test = ['http://www.millercenter.org/president/obama/speeches/speech-4427',
                   'http://www.millercenter.org/president/obama/speeches/speech-4424',
                   'http://www.millercenter.org/president/obama/speeches/speech-4453',
                   'http://www.millercenter.org/president/obama/speeches/speech-4612',
                   'http://www.millercenter.org/president/obama/speeches/speech-5502']

data = {}
count = 0
for l in every_link_test:
    content_1 = processURL_short(l)
    for word in content_1.split():
        word = word.strip(p)
        if word in contractions:
            count = count + 1
    splitlink = l.split("/")
    president = splitlink[4]
    speech_num = splitlink[-1]
    filename = "{0}_{1}".format(president, speech_num)
    data[filename] = count
    print count, filename
    with open('contraction_counts.csv', 'w', newline='') as fp:
        a = csv.writer(fp, delimiter=',')
        a.writerows(data)
Running that for loop prints out
79 obama_speech-4427
101 obama_speech-4424
101 obama_speech-4453
182 obama_speech-4612
224 obama_speech-5502
I want to export that to a text file, where the numbers on the left are one column, and the president/speech number are in the second column. My with statement just writes each individual row to a separate file, which is definitely suboptimal.
You can try something like this; it's a generic method, so modify it as you see fit:
import csv

with open('somepath/file.txt', 'wb+') as outfile:
    w = csv.writer(outfile)
    w.writerow(['header1', 'header2'])
    for i in your_data_structure:  # assuming a list of pairs here
        w.writerow([
            i[0],
            i[1],
        ])
or, if it's a dictionary:

import csv

with open('somepath/file.txt', 'wb+') as outfile:
    w = csv.writer(outfile)
    w.writerow(['header1', 'header2'])
    for k, v in your_dictionary.items():  # one row per key/value pair
        w.writerow([
            k,
            v,
        ])
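Applied to the data dict built in the question, that dictionary variant would look something like this (the header names are placeholders, and 'wb+' is for Python 2):

import csv

with open('contraction_counts.csv', 'wb+') as outfile:
    w = csv.writer(outfile)
    w.writerow(['speech', 'contractions'])  # placeholder header names
    for filename, count in data.items():
        w.writerow([filename, count])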
Your problem is that you open the output file inside the loop in w mode, meaning it is erased on each iteration. You can easily solve this in two ways:
Move the open outside of the loop (the normal way). You open the file only once, add a line on each iteration, and close it when exiting the with block:
with open('contraction_counts.csv', 'w', newline='') as fp:
    a = csv.writer(fp, delimiter=',')
    for l in every_link_test:
        content_1 = processURL_short(l)
        for word in content_1.split():
            word = word.strip(p)
            if word in contractions:
                count = count + 1
        splitlink = l.split("/")
        president = splitlink[4]
        speech_num = splitlink[-1]
        filename = "{0}_{1}".format(president, speech_num)
        data[filename] = count
        print count, filename
    a.writerows(data.items())  # write (filename, count) pairs; a bare dict would yield only its keys
Open the file in a (append) mode. On each iteration you reopen the file and write at the end instead of erasing it. This uses more IO resources because of the repeated open/close, and should be used only if the program can crash and you want to be sure that everything written before the crash has actually been saved to disk:
for l in every_link_test:
    content_1 = processURL_short(l)
    for word in content_1.split():
        word = word.strip(p)
        if word in contractions:
            count = count + 1
    splitlink = l.split("/")
    president = splitlink[4]
    speech_num = splitlink[-1]
    filename = "{0}_{1}".format(president, speech_num)
    data[filename] = count
    print count, filename
    with open('contraction_counts.csv', 'a', newline='') as fp:
        a = csv.writer(fp, delimiter=',')
        a.writerow([filename, count])  # append just this iteration's pair, not the whole dict
I am pretty new to Python and Beautiful Soup; this is my first 'real' project. I am trying to scrape some info from a website, and so far I have been semi-successful: I have identified the table and got Python to print out the relevant information pretty nicely.
I am stuck on writing the information Python prints to a usable CSV file.
Here is what I have for my code to get Python to print the info I need:
for row in table_1.find_all('tr'):
    tds = row.find_all('td')
    try:
        a = str(tds[0].get_text())
        b = str(tds[1].get_text())
        c = str(tds[2].get_text())
        d = str(tds[3].get_text())
        e = str(tds[4].get_text())
        f = str(tds[5].get_text())
        g = str(tds[7].get_text())
        print 'User Name:' + a
        print 'Source:' + b
        print 'Staff:' + c
        print 'Location:' + d
        print 'Attended On:' + e
        print 'Used:' + f
        print 'Date:' + g + '\n'
    except:
        print 'bad string'
        continue
Here is a more succinct way to collect your data:
columns = ["User Name", "Source", "Staff", "Location", "Attended On", "Used", "Date"]
table = []
for row in table_1.find_all('tr'):
    tds = row.find_all('td')
    try:
        data = [td.get_text() for td in tds]
        for field, value in zip(columns, data):
            print("{}: {}".format(field, value))
        table.append(data)
    except:
        print("Bad string value")
and you can then write to csv as
import csv

with open("myfile.csv", "wb") as outf:  # Python 2.x
# with open("myfile.csv", "w", newline="") as outf:  # Python 3.x
    outcsv = csv.writer(outf)
    # header row
    outcsv.writerow(columns)
    # data
    outcsv.writerows(table)
You could append a through g as a sub-list to a list on each iteration of the loop:
my_list = []
for row in table_1.find_all('tr'):
    tds = row.find_all('td')
    a = str(tds[0].get_text())
    b = str(tds[1].get_text())
    c = str(tds[2].get_text())
    d = str(tds[3].get_text())
    e = str(tds[4].get_text())
    f = str(tds[5].get_text())
    g = str(tds[7].get_text())
    my_list.append([a, b, c, d, e, f, g])
Then:
import csv

with open('output_table.csv', 'wb') as csvfile:
    wr = csv.writer(csvfile, lineterminator='\n')
    wr.writerows(my_list)
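As in the earlier answer, 'wb' is the Python 2 way to open the output file; on Python 3 you would use open('output_table.csv', 'w', newline='') instead.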