I need to split lines into variables.
Here is an example of 2 lines:
port11.annex1.naples.net [30:00:00:03] "GET /logos/small_gopher.gif HTTP/1.0" 200 935
port11.annex1.naples.net [30:00:00:03] "GET /icons/book.gif" 200 935
However, as you can see, sometimes a line is missing one piece.
How can I split this without errors?
Currently I am using:
for x in log.readlines():
    data = x.split(" ")
    hostname = data[0]
    time = data[1]
    command = data[2]
    resource = data[3]
    version = data[4]
    status = data[5]
    size = data[6]
This gives errors, because not every line has 7 "items".
Maybe I should split on multiple delimiters, but I can't find a good way that works...
Why are you not doing it like this? Suppose your log string is this one:
log = r'port11.annex1.naples.net [30:00:00:03] "GET /icons/book.gif" 200 935'
data = log.split(" ")
for i in data:
    print i
This way you won't have to give the index and will be able to remove hard-coding.
You could use a regex to match the different components of the log. Then you'll need to check whether the request part consists of command, resource, and version, or only command and resource. Something like this could work:
import re

# open your log file here...
logmatcher = re.compile(r'([^ ]+) (\[[:0-9]+\]) ("[^"]+") ([0-9]+) ([0-9]+)')

for x in log.readlines():
    res = logmatcher.findall(x)
    if len(res) > 0:
        hostname = res[0][0]
        time = res[0][1]
        req = res[0][2][1:-1].split(" ")  # [1:-1] to get rid of the ""
        if len(req) > 2:  # check if the request contains the HTTP version
            command = req[0]
            resource = req[1]
            version = req[2]
        else:
            command = req[0]
            resource = req[1]
            version = ""  # there's no version in the request, just use ""
        status = res[0][3]
        size = res[0][4]
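For reference, running the matcher over the two sample lines yields one tuple per line; a small self-contained sketch:

import re

logmatcher = re.compile(r'([^ ]+) (\[[:0-9]+\]) ("[^"]+") ([0-9]+) ([0-9]+)')

lines = [
    'port11.annex1.naples.net [30:00:00:03] "GET /logos/small_gopher.gif HTTP/1.0" 200 935',
    'port11.annex1.naples.net [30:00:00:03] "GET /icons/book.gif" 200 935',
]
for line in lines:
    res = logmatcher.findall(line)
    if res:
        # each match is a 5-tuple: hostname, time, quoted request, status, size
        hostname, time, request, status, size = res[0]
        print hostname, time, request, status, size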
I am using ConfigParser to write some modifications to a configuration file. Basically, what I am doing is:
retrieve my urls from an API
write them to my config file
But after the edit, I noticed that the file format has changed:
Before the edit :
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
[[inputs.cpu]]
percpu = true
totalcpu = true
[[inputs.prometheus]]
urls= []
interval = "140s"
[inputs.prometheus.tags]
exp = "exp"
After the edit :
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
[[inputs.cpu]
percpu = true
totalcpu = true
[[inputs.prometheus]
interval = "140s"
response_timeout = "120s"
[inputs.prometheus.tags]
exp = "snmp"
The indentation (offset) changed and all the comments that were in the file have been deleted. My code:
edit = configparser.ConfigParser(strict=False, allow_no_value=True, empty_lines_in_values=False)
edit.read("file.conf")
edit.set("[section]", "urls", str(urls))
print(edit)
# Write changes back to file
with open('file.conf', 'w') as configfile:
    edit.write(configfile)
I have already tried SafeConfigParser and RawConfigParser, but it doesn't work.
When I do a print(edit.sections()), here is what I get: ['global_tags', 'agent', '[inputs.cpu', '[inputs.prometheus', 'inputs.prometheus.tags']
Can anyone help, please?
Here's an example of a "filter" parser that retains all other formatting but changes the urls line in the agent section if it comes across it:
import io

def filter_config(stream, item_filter):
    """
    Filter a "config" file stream.
    :param stream: Text stream to read from.
    :param item_filter: Filter function; takes a section and a line and returns a filtered line.
    :return: Yields (possibly) filtered lines.
    """
    current_section = None
    for line in stream:
        stripped_line = line.strip()
        if stripped_line.startswith('['):
            current_section = stripped_line.strip('[]')
        elif not stripped_line.startswith("#") and " = " in stripped_line:
            line = item_filter(current_section, line)
        yield line

def urls_filter(section, line):
    if section == "agent" and line.strip().startswith("urls = "):
        start, sep, end = line.partition(" = ")
        return start + sep + "hi there...\n"  # keep the trailing newline so following lines stay intact
    return line
# Could be a disk file, just using `io.StringIO()` for self-containedness here
config_file = io.StringIO("""
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
# HELLO! THIS IS A COMMENT!
metric_buffer_limit = 100000
urls = ""
[other]
urls = can't touch this!!!
""")
for line in filter_config(config_file, urls_filter):
    print(line, end="")
The output is
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
# HELLO! THIS IS A COMMENT!
metric_buffer_limit = 100000
urls = hi there...
[other]
urls = can't touch this!!!
so you can see that all comments and (mis-)indentation were preserved.
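To run it against a file on disk instead of the StringIO, read all the filtered lines first and then write them back; a sketch, assuming your config lives in file.conf:

with open("file.conf") as src:
    filtered = list(filter_config(src, urls_filter))  # materialize before reopening for writing

with open("file.conf", "w") as dst:
    dst.writelines(filtered)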
The problem is that you're passing brackets with the section name, which is unnecessary:
edit.set("[section]", "urls", str(urls))
See this example from the documentation:
import configparser
config = configparser.RawConfigParser()
# Please note that using RawConfigParser's set functions, you can assign
# non-string values to keys internally, but will receive an error when
# attempting to write to a file or when you get it in non-raw mode. Setting
# values using the mapping protocol or ConfigParser's set() does not allow
# such assignments to take place.
config.add_section('Section1')
config.set('Section1', 'an_int', '15')
config.set('Section1', 'a_bool', 'true')
config.set('Section1', 'a_float', '3.1415')
config.set('Section1', 'baz', 'fun')
config.set('Section1', 'bar', 'Python')
config.set('Section1', 'foo', '%(bar)s is %(baz)s!')
# Writing our configuration file to 'example.cfg'
with open('example.cfg', 'w') as configfile:
    config.write(configfile)
But, anyway, it won't preserve the indentation, nor will it support nested sections. You could try the YAML format, which does allow using indentation to separate nested sections; it won't keep your exact original indentation when saving, but do you really need it to be exactly the same? In any case, there are various configuration formats out there; study them to see which fits your case best.
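For illustration, a minimal nested round trip with PyYAML (a sketch: the section names mirror your file, but the URL value is just a placeholder, and pyyaml is a third-party package):

import yaml  # pip install pyyaml

config = {
    "agent": {"interval": "10s", "round_interval": True},
    "inputs": {
        "prometheus": {
            "urls": ["http://localhost:9090"],  # placeholder URL
            "interval": "140s",
        }
    },
}

# Nesting is expressed by indentation in the written file.
with open("config.yaml", "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False)

with open("config.yaml") as f:
    loaded = yaml.safe_load(f)

print(loaded["inputs"]["prometheus"]["urls"])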
I'm new here on Stack Overflow, but I have found a LOT of answers on this site. I'm also a programming newbie, so I figured I'd join and finally become part of this community, starting with a question about a problem that's been plaguing me for hours.
I log in to a website and scrape a big body of text within the b tag, to be converted into a proper table. The layout of the resulting Output.txt looks like this:
BIN STATUS
8FHA9D8H 82HG9F RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
INVENTORY CODE: FPBC *SOUP CANS LENTILS
BIN STATUS
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
93827548 096DBR RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
There are a bunch of pages with the exact same blocks, but I need them to be combined into an ACTUAL table that looks like this:
BIN INV CODE STATUS
HA8DHW2HHD0138 FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU00A123 FPBC-*SOUP CANS LENTILS #2956- INVALID STOCK COUPON CODE (MISSING).
93827548096DBR FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8FHA9D8H82HG9F SSXR-98-20LM NM CORN CREAM RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
Essentially, all separate text blocks in this example would become part of this table, with the inv code repeated alongside its bin values. I would post my attempts at parsing this data (I have tried pandas, BeautifulSoup, openpyxl, and the csv writer), but I'll admit they are a little embarrassing, as I cannot find any information on this specific problem. Is there any benevolent soul out there who can help me out? :)
(Also, I am using Python 2.7.)
A simple custom parser like the following should do the trick.
from __future__ import print_function

def parse_body(s):
    line_sep = '\n'
    getting_bins = False
    inv_code = ''
    for l in s.split(line_sep):
        if l.startswith('INVENTORY CODE:') and not getting_bins:
            inv_data = l.split()
            inv_code = inv_data[2] + '-' + ' '.join(inv_data[3:])
        elif l.startswith('INVENTORY CODE:') and getting_bins:
            print("unexpected inventory code while reading bins:", l)
        elif l.startswith('BIN') and l.endswith('STATUS'):
            getting_bins = True
        elif getting_bins and l:
            bin_data = l.split()
            # need to add exception handling here to make sure:
            # 1) we have an inv_code
            # 2) bin_data is at least 3 items big (assuming two for
            #    bin_id and at least one for the status message)
            # 3) maybe some constraint checking to ensure that we have
            #    a valid instance of an inventory code and bin id
            bin_id = ''.join(bin_data[0:2])
            message = ' '.join(bin_data[2:])
            # we now have a bin, an inv_code, and a message to add to our table
            print(bin_id.ljust(20), inv_code.ljust(30), message, sep='\t')
        elif getting_bins and not l:
            # done getting bins for the current inventory code
            getting_bins = False
            inv_code = ''
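For example, feeding it one blank-line-separated block prints the combined rows (a sketch: the parser expects each INVENTORY CODE line to precede its BIN header and a blank line to end each block, so the raw dump from the question may need reshaping first):

sample = """INVENTORY CODE: FPBC *SOUP CANS LENTILS

BIN STATUS
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
"""

parse_body(sample)
# HA8DHW2HHD0138        FPBC-*SOUP CANS LENTILS         RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
# 8SHDNADU00A123        FPBC-*SOUP CANS LENTILS         #2956- INVALID STOCK COUPON CODE (MISSING).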
A rather complex one, but this might get you started:
import re
from pandas import DataFrame

rx = re.compile(r'''
    (?:INVENTORY\ CODE:)\s*
    (?P<inv>.+\S)
    [\s\S]+?
    ^BIN.+[\n\r]
    (?P<bin_msg>(?:(?!^\ ).+[\n\r])+)
    ''', re.MULTILINE | re.VERBOSE)

rxbinmsg = re.compile(r'^(?P<bin>(?:(?!\ {2}).)+)\s+(?P<message>.+\S)\s*$', re.MULTILINE)

string = your_string_here

# set up the dataframe
df = DataFrame(columns=['BIN', 'INV', 'MESSAGE'])

for match in rx.finditer(string):
    inv = match.group('inv')
    bin_msg_raw = match.group('bin_msg').split("\n")
    for item in bin_msg_raw:
        for m in rxbinmsg.finditer(item):
            # append it to the dataframe
            df.loc[len(df.index)] = [m.group('bin'), inv, m.group('message')]

print(df)
Explanation
It looks for INVENTORY CODE and captures the groups (inv and bin_msg) for further processing below (note: it would be easier if you had only one line of bin/msg, as the bin_msg group has to be split afterwards).
Afterwards, it splits the bin and msg parts and appends everything to the df object.
I had some code written for website scraping which may help you.
Basically, what you need to do is right-click on the web page, view the HTML, and try to find the tag for the table you are looking for; then extract the information using a module (I am using BeautifulSoup). I am creating JSON, as I need to store it in MongoDB; you can create a table instead.
#! /usr/bin/python
import sys
import requests
import re
from BeautifulSoup import BeautifulSoup
import pymongo

def req_and_parsing():
    url2 = 'http://businfo.dimts.in/businfo/Bus_info/EtaByRoute.aspx?ID='
    list1 = ['534UP', '534DOWN']
    outdict = [parsing_file(requests.get(url2 + Route).text, Route) for Route in list1]
    print outdict
    conn = f_connection()
    for i in range(len(outdict)):
        insert_records(conn, outdict[i])

def parsing_file(txt, Route):
    soup = BeautifulSoup(txt)
    table = soup.findAll("table", {"id": "ctl00_ContentPlaceHolder1_GridView2"})
    trtddict = {}
    divtags = soup.findAll("span", {"id": "ctl00_ContentPlaceHolder1_ErrorLabel"})
    for divtag in divtags:
        print "div tag - ", divtag.text
        if divtag.text in ("Currently no bus is running on this route",
                           "This is not a cluster (orange bus) route"):
            print "Page not displayed. Errored with below message for Route-", Route, " , ", divtag.text
            sys.exit()
    trtags = table[0].findAll('tr')
    for trtag in trtags:
        tdtags = trtag.findAll('td')
        if len(tdtags) == 2:
            trtddict[tdtags[0].text] = sub_colon(tdtags[1].text)
    return trtddict

def sub_colon(tag_str):
    return re.sub(';', ',', tag_str)

def f_connection():
    try:
        conn = pymongo.MongoClient()
        print "Connected successfully!!!"
    except pymongo.errors.ConnectionFailure, e:
        print "Could not connect to MongoDB: %s" % e
    return conn

def insert_records(conn, stop_dict):
    db = conn.test
    print db.collection_names()
    mycoll = db.stopsETA
    mycoll.insert(stop_dict)

if __name__ == "__main__":
    req_and_parsing()
EDIT (SOLVED): When I am reading the values in from my file, a newline character (\n) is getting added onto the end. This splits my request string at that point.
I think it's to do with how I saved the values to the file in the first place. Many thanks.
I have the following code:
results = 'http://www.myurl.com/'+str(mystring)
print str(results)
request = urllib2.Request(results)
request.add_header('User-Agent','Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)')
opener = urllib2.build_opener()
text = opener.open(request).read()
Which is in a loop.
After the loop has run a few times, str(mystring) changes to give a different set of results.
I can loop the script as many times as I like keeping the value of str(mystring) constant, but every time I change the value of str(mystring) I get an error saying "no host given" when the code tries to build the opener.
opener = urllib2.build_opener()
Can anyone help please?
TIA,
Paul.
EDIT:
More code here.....
import sys
import string
import httplib
import urllib2
import re
import random
import time

def StripTags(text):
    finished = 0
    while not finished:
        finished = 1
        start = text.find("<")
        if start >= 0:
            stop = text[start:].find(">")
            if stop >= 0:
                text = text[:start] + text[start+stop+1:]
                finished = 0
    return text

mystring = "test"
d = {}
with open("myfile", "r") as f:
    while True:
        page_counter = 0
        print str(mystring)
        try:
            while page_counter < 20:
                results = 'http://www.myurl.com/' + str(mystring)
                print str(results)
                request = urllib2.Request(results)
                request.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)')
                opener = urllib2.build_opener()
                text = opener.open(request).read()
                finds = re.findall('([\w\.\-]+' + mystring + ')', StripTags(text))
                for find in finds:
                    d[find] = 1
                uniq_emails = d.keys()
                page_counter = page_counter + 1
                print "found this " + str(finds)
                random.seed()
                n = random.random()
                i = n * 5
                print "Pausing script for " + str(i) + " Seconds"
                time.sleep(i)
            mystring = next(f)
        except IOError:
            print "No result found!"
I found the answer. It's as follows:
The values for mystring were read in from a file.
In the script I wrote to create that file, I opened it with "w" instead of "wb".
Each line in the file ended with a newline character "\n".
When mystring was added to the request string, the newline created a line break in the middle of the request string.[1]
This would never have been apparent from my code, because I changed it to post here in an effort to hide the real url I am using to get my results.[2]
My actual url looks more like this.....
Myurl.com/mystring/otherstuff/page_counter/morestuff.htm
The \n being read from the file spliced my url and gave urllib problems.
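In code terms, the guard is a one-line strip when reading each value from the file (a minimal sketch):

mystring = next(f).rstrip("\r\n")  # drop the trailing newline (and any CR) before building the URL
print repr(mystring)               # repr() makes any leftover whitespace visible
results = 'http://www.myurl.com/' + mystring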
[1] I use Windows. It adds lots of unseen things to text files. If I'd opened the file to write to with "wb" instead of "w", the contents would have been written without the unseen extra characters.
[2] Always post your full code, kids. The good people of Stack Overflow can't help you unless they can see what you are doing.
Many thanks all, I hope this helps someone out at some point.
Paul.
In the while loop, you're setting results to something which is not a url:
results = 'myurl+str(mystring)'
It should probably be results = myurl+str(mystring)
By the way, it appears there's no need for all the casting to string (str()) you do:
(expanded on request)
print str(foo): in such a case, str() is never necessary. Python will always print foo's string representation
results = 'http://www.myurl.com/'+str(mystring). This is also unnecessary; mystring is already a string, so 'http://www.myurl.com/' + mystring would suffice.
print "Pausing script for " + str(i) + " Seconds". Here you would get an error without str() since you can't do string + int. However, print "foo", 1, "bar" does work. As do print "foo %i bar" % 1 and print "foo {0} bar".format(1) (see here)
I want to write a program that sends an e-mail to one or more specified recipients when a certain event occurs. For this I need the user to write the parameters for the mail server into a config. Possible values are for example: serveradress, ports, ssl(true/false) and a list of desired recipients.
What's the most user-friendly, best-practice way to do this?
I could of course use a Python file with the correct parameters that the user has to fill out, but I wouldn't consider this user-friendly. I also read about the 'config' module in Python, but it seems to me that it's made for creating config files on its own, not for having users fill the files out themselves.
Are you saying that the fact that the config file would need to be valid Python makes it unfriendly? It seems like having lines in a file like:
server = 'mail.domain.com'
port = 25
...etc would be intuitive enough while still being valid Python. If you don't want the user to have to know that they have to quote strings, though, you might go the YAML route. I use YAML pretty much exclusively for config files and find it very intuitive; it would also be intuitive for an end user, I think (though it requires a third-party module, PyYAML):
server: mail.domain.com
port: 25
Having pyyaml load it is simple:
>>> import yaml
>>> yaml.load("""a: 1
... b: foo
... """)
{'a': 1, 'b': 'foo'}
With a file it's easy too.
>>> with open('myconfig.yaml', 'r') as cfile:
...     config = yaml.load(cfile)
...
config now contains all of the parameters.
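For the mail settings in question, access is then plain dictionary lookups. A sketch, assuming myconfig.yaml holds keys named server, port, ssl, and recipients (the key names are illustrative):

server = config['server']           # e.g. 'mail.domain.com'
port = config['port']               # YAML parses 25 as an int automatically
use_ssl = config.get('ssl', False)  # missing keys can fall back to a default
recipients = config.get('recipients', [])  # a YAML list loads as a Python list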
It doesn't matter how technically proficient your users are; you can count on them to screw up editing a text file. (They'll save it in the wrong place. They'll use MS Word to edit a text file. They'll make typos.) I suggest making a GUI that validates the input and creates the configuration file in the correct format and location. A simple GUI created in Tkinter would probably fit your needs.
I've been using ConfigParser. It's designed to read .ini style files that have:
[section]
option = value
It's quite easy to use and the documentation is pretty easy to read. Basically you just load the whole file into a ConfigParser object:
import ConfigParser
config = ConfigParser.ConfigParser()
config.read('configfile.txt')
Then you can make sure the users haven't messed anything up by checking the options. I do so with a list:
OPTIONS = ['section,option,defaultvalue',
           # ... one 'section,option,defaultvalue' string per option ...
           ]
for opt in OPTIONS:
    section, option, defaultval = opt.split(',')
    if not config.has_option(section, option):
        print "Missing option %s in section %s" % (option, section)
Getting the values out is easy too.
val = config.get('section','option')
And I also wrote a function that creates a sample config file using that OPTIONS list.
new_config = ConfigParser.ConfigParser()
for opt in OPTIONS:
    section, option, defaultval = opt.split(',')
    if not new_config.has_section(section):
        new_config.add_section(section)
    new_config.set(section, option, defaultval)

with open("sample_configfile.txt", 'wb') as newconfigfile:
    new_config.write(newconfigfile)

print "Generated file: sample_configfile.txt"
What are the drawbacks of such a solution?
ch = 'serveradress = %s\nport = %s\nssl = %s'

a = raw_input("Enter the server's address : ")

b = 'a'
bla = "\nEnter the port : "
while not all(x.isdigit() for x in b):
    b = raw_input(bla)
    bla = "Take care: you must enter digits exclusively\n" \
          + " Re-enter the port (digits only) : "

c = ''
bla = "\nChoose the ssl option (t or f) : "
while c not in ('t', 'f'):
    c = raw_input(bla)
    bla = "Take care: you must type f or t exclusively\n" \
          + " Re-choose the ssl option : "

with open('configfile.txt', 'w') as f:
    f.write(ch % (a, b, c))
PS
I've read in jonesy's post that the values in a config file may have to be quoted. If so, and you want the user not to have to write the quotes him/herself, you can simply add:
a = a.join('""')
b = b.join('""')
c = c.join('""')
EDIT
ch = 'serveradress = %s\nport = %s\nssl = %s'

d = {0: ('',
         "Enter the server's address : "),
     1: ("Take care: you must enter digits exclusively",
         "Enter the port : "),
     2: ("Take care: you must type f or t exclusively",
         "Choose the ssl option (t or f) : ")}

def func(i, x):
    if x is None:
        return False
    if i == 0:
        return True
    elif i == 1:
        try:
            int(x)
            return True
        except ValueError:
            return False
    elif i == 2:
        return x in ('t', 'f')

li = len(d) * [None]
L = range(len(d))

while True:
    for n in sorted(L):
        bla = d[n][1]
        val = None
        while not func(n, val):
            val = raw_input(bla)
            bla = '\n '.join(d[n])
        li[n] = val.join('""')

    decision = ''
    disp = "\n====== If you choose to process, ==============" \
           + "\n the content of the file will be :\n\n" \
           + ch % tuple(li) \
           + "\n===============================================" \
           + "\n\nDo you want to process (type y) or to correct (type c) : "
    while decision not in ('y', 'c'):
        decision = raw_input(disp)
        disp = "Do you want to process (type y) or to correct (type c) ? : "

    if decision == 'y':
        break
    else:
        diag = False
        while not diag:
            vi = '\nWhat lines do you want to correct ?\n' \
                 + '\n'.join(str(j) + ' - ' + line
                             for j, line in enumerate((ch % tuple(li)).splitlines())) \
                 + '\nType numbers of lines belonging to range(0,' + str(len(d)) + ') separated by spaces :\n'
            to_modify = raw_input(vi)
            try:
                diag = all(int(entry) in xrange(len(d)) for entry in to_modify.split())
                L = [int(entry) for entry in to_modify.split()]
            except ValueError:
                diag = False

with open('configfile.txt', 'w') as f:
    f.write(ch % tuple(li))

print '-*- Recording of the config file : done -*-'
I'm trying to get some results from UniProt, which is a protein database (the details are not important). I'm trying to use a script that translates from one kind of ID to another. I was able to do this manually in the browser, but could not do it in Python.
In http://www.uniprot.org/faq/28 there are some sample scripts. I tried the Perl one and it seems to work, so the problem is with my Python attempts. The (working) script is:
## tool_example.pl ##
use strict;
use warnings;
use LWP::UserAgent;

my $base = 'http://www.uniprot.org';
my $tool = 'mapping';
my $params = {
    from => 'ACC', to => 'P_REFSEQ_AC', format => 'tab',
    query => 'P13368 P20806 Q9UM73 P97793 Q17192'
};

my $agent = LWP::UserAgent->new;
push @{$agent->requests_redirectable}, 'POST';
print STDERR "Submitting...\n";
my $response = $agent->post("$base/$tool/", $params);

while (my $wait = $response->header('Retry-After')) {
    print STDERR "Waiting ($wait)...\n";
    sleep $wait;
    print STDERR "Checking...\n";
    $response = $agent->get($response->base);
}

$response->is_success ?
    print $response->content :
    die 'Failed, got ' . $response->status_line .
        ' for ' . $response->request->uri . "\n";
My questions are:
1) How would you do that in Python?
2) Will I be able to massively "scale" that (i.e., use a lot of entries in the query field)?
Question #1:
This can be done using Python's urllib and urllib2 modules:
import urllib, urllib2
import time
import sys

query = ' '.join(sys.argv[1:])  # skip argv[0], the script name

# encode params as a list of 2-tuples
params = (('from', 'ACC'), ('to', 'P_REFSEQ_AC'), ('format', 'tab'), ('query', query))
# url encode them
data = urllib.urlencode(params)
url = 'http://www.uniprot.org/mapping/'

# fetch the data
try:
    foo = urllib2.urlopen(url, data)
except urllib2.HTTPError, e:
    if e.code == 503:
        # blah blah get the value of the header...
        wait_time = int(e.hdrs.get('Retry-after', 0))
        print 'Sleeping %i seconds...' % (wait_time,)
        time.sleep(wait_time)
        foo = urllib2.urlopen(url, data)

# foo is a file-like object, do with it what you will.
foo.read()
You're probably better off using the Protein Identifier Cross Reference service from the EBI to convert one set of IDs to another. It has a very good REST interface.
http://www.ebi.ac.uk/Tools/picr/
I should also mention that UniProt has very good web services available, though if you are tied to using simple HTTP requests for some reason, they're probably not useful.
Let's assume that you are using Python 2.5.
We can use httplib to directly call the web site:
import httplib, urllib

querystring = {}
# Build the query string here from the following keys (query, format, columns, compress, limit, offset)
querystring["query"] = ""
querystring["format"] = ""    # one of html | tab | fasta | gff | txt | xml | rdf | rss | list
querystring["columns"] = ""   # the columns you want, comma separated
querystring["compress"] = ""  # yes or no
## These may be optional
querystring["limit"] = ""     # I guess if you only want a few rows
querystring["offset"] = ""    # bring on paging

## From the examples - query=organism:9606+AND+antigen&format=xml&compress=no
## Delete the following and replace with your query
querystring = {}
querystring["query"] = "organism:9606 AND antigen"
querystring["format"] = "xml"   # make it human readable
querystring["compress"] = "no"  # I don't want to have to unzip

conn = httplib.HTTPConnection("www.uniprot.org")
conn.request("GET", "/uniprot/?" + urllib.urlencode(querystring))
r1 = conn.getresponse()
if r1.status == 200:
    data1 = r1.read()
    print data1  # or do something with it
You could then make a function around creating the query string and you should be away.
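A sketch of that wrapper (the function name uniprot_query and its defaults are illustrative, not part of the UniProt API):

import httplib, urllib

def uniprot_query(query, format="tab", columns="", compress="no"):
    # Run a UniProt query and return the response body, or None on failure.
    params = {"query": query, "format": format, "compress": compress}
    if columns:
        params["columns"] = columns
    conn = httplib.HTTPConnection("www.uniprot.org")
    conn.request("GET", "/uniprot/?" + urllib.urlencode(params))
    resp = conn.getresponse()
    if resp.status == 200:
        return resp.read()
    return None

data = uniprot_query("organism:9606 AND antigen", format="xml")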
Check out bioservices; it interfaces a lot of databases through Python.
https://pythonhosted.org/bioservices/_modules/bioservices/uniprot.html
conda install bioservices --yes
In complement to O.rka's answer:
Question 1:
from bioservices import UniProt
u = UniProt()
res = u.get_df("P13368 P20806 Q9UM73 P97793 Q17192".split())
This returns a dataframe with all information about each entry.
Question 2: same answer. This should scale up.
Disclaimer: I'm the author of bioservices
There is a Python package on PyPI that does exactly what you want:
pip install uniprot-mapper