This seems like a simple task and I'm not sure if I've accomplished it already, or if I'm chasing my tail.
values = [value.replace('-','') for value in values] ## strips out hyphen (only 1)
print values ## outputs ['0160840020']
parcelID = str(values) ## convert to string
print parcelID ##outputs ['0160840020']
url = 'Detail.aspx?RE='+ parcelID ## outputs Detail.aspx?RE=['0160840020']
As you can see, I'm trying to append the number to the end of the URL in order to change the page via a POST parameter. My question is: how do I strip the [' prefix and '] suffix? I've already tried parcelID.strip("['") with no luck. Am I doing this correctly?
values is a list (of length 1), which is why it appears in brackets. If you want to get just the ID, do:
parcelID = values[0]
Instead of
parcelID = str(values)
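You can see the difference by printing both (Python 2 prints, matching your code):
values = ['0160840020']
print str(values)  # ['0160840020'] -- the list's repr, brackets and quotes included
print values[0]    # 0160840020 -- just the string itself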
Assuming you actually have a list of several values when you do this (and not just one item), the following would solve your problem (it also works for a single item, as in your example):
values = [value.replace('-', '') for value in values]  ## strips out the hyphen (only 1)
# create a list of urls from the parcelIDs
urls = ['Detail.aspx?RE=' + str(parcelID) for parcelID in values]
# use each url one at a time
for url in urls:
    # do whatever you need to do with each URL
    print url
I have the following set in Python (it's actually one set item):
product_set = {'Product, Product_Source_System, Product_Number'}
I want to add a static prefix (source.) to all the comma-separated values in the set, so I get the following output:
{'source.Product, source.Product_Source_System, source.Product_Number'}
I tried a set comprehension, but it doesn't do the trick, or I'm doing something wrong: it only prefixes the first value in the set.
{"source." + x for x in set}
I know sets are immutable. I don't need a new set, just output the new values.
Can anyone help? Thanks in advance.
Edit: Splitting the initial long string into a list of short strings and then (only if required) making a set out of the list:
s1 = set('Product, Product_Source_System, Product_Number'.split(', '))
Constructing a new set:
s1 = {'Product', 'Product_Source_System', 'Product_Number'}
s2 = {"source." + x for x in s1}
Only printing the new strings:
for x in s1:
    print("source." + x)
Note: the desired result you show is a new set with updated comma-separated values, but further down you say: "I don't need a new set, just output the new values". Which one is it? Below is an option that mimics your desired result:
import re
product_set = {'Product, Product_Source_System, Product_Number'}
product_set = {re.sub(r'^|(,\s*)', r'\1source.', list(product_set)[0])}
# product_set = {'source.' + list(product_set)[0].replace(', ', ', source.')}
print(product_set)
Prints:
{'source.Product, source.Product_Source_System, source.Product_Number'}
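If you'd rather avoid the regex, a split/join sketch does the same thing (using product_set from the question):
product_set = {'Product, Product_Source_System, Product_Number'}
item = next(iter(product_set))  # pull out the set's single string
product_set = {', '.join('source.' + part for part in item.split(', '))}
print(product_set)  # {'source.Product, source.Product_Source_System, source.Product_Number'}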
I have read in data from a basic txt file. The data is time and date in the form "DD/HHMM" (meteorological date and time data). I have read this data into a list, time[], which prints out as you would imagine: ['15/1056', '15/0956', '15/0856', .........]. Is there a way to alter the list so that it ends up with just the time, removing the date and the forward slash, like so: ['1056', '0956', '0856', .........]? I have already tried list.split, but that's not how that works, I don't think. Thanks.
I'm still learning myself and I haven't touched Python in some time, but here's my solution if you really need one:
myList = ['15/1056', '15/0956', '15/0856']
newList = []
for x in myList:
    # split at '/' returns ['15', '1056']; append whatever is at index 1
    newList.append(x.split("/")[1])
print(newList)  # for verification
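For what it's worth, the same thing can be written as a single list comprehension:
myList = ['15/1056', '15/0956', '15/0856']
newList = [x.split("/")[1] for x in myList]
print(newList)  # ['1056', '0956', '0856']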
Considering having this list:
data = ["http://x.com/", "http://x.com/some/dir/", "http://x.com/other", "http://y.com/something", "http://y.com/else"]
I want to remove duplicates that share the same domain, so the expected output is:
http://x.com/
http://y.com/something
I know about list(set(data)) trick but it wouldn't work for this case.
I thought of iterating and building a dict in key : value form, with the domain as key and the whole URL as value, keeping only one occurrence, but I think that technique is crappy and not Pythonic.
This gets you one entry per domain (it happens to be the last, not the first):
from urllib.parse import urlparse
data = ["http://x.com/", "http://x.com/some/dir/", "http://x.com/other", "http://y.com/something", "http://y.com/else"]
result = list({urlparse(url).netloc: url for url in data}.values())
If you prefer the first:
result = list({urlparse(url).netloc: url for url in reversed(data)}.values())
print(result)
Outcome:
['http://y.com/something', 'http://x.com/']
This works as follows:
urlparse('https://somedomain.com/some/path') will break down the URL and one of the parts .netloc is the domain you're after, i.e. 'somedomain.com'
{urlparse(url).netloc: url for url in reversed(data)} reverses the list data and then, for each url in the list, gets the domain and adds an entry to the dictionary being constructed, with the domain as the key and the URL as the value; since keys in a dictionary have to be unique, every time the same domain comes up, the entry is overwritten (hence the reversal, so the first occurrence wins)
list(somedict.values()) just takes the values of the dictionary and turns them into a simple list.
So that explains how result = list({urlparse(url).netloc: url for url in reversed(data)}.values()) ends up as ['http://y.com/something', 'http://x.com/'] for your input data.
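If the reversal feels indirect, a plain loop with a seen set keeps the first URL per domain and preserves the original order (a sketch using the same urlparse):
from urllib.parse import urlparse

seen = set()
result = []
for url in data:
    domain = urlparse(url).netloc
    if domain not in seen:
        seen.add(domain)
        result.append(url)
print(result)  # ['http://x.com/', 'http://y.com/something']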
I have the following problem:
list1=['xyz','xyz2','other_randoms']
list2=['xyz']
I need to find which elements of list2 are in list1. In actual fact, the elements of list1 correspond to a numerical value which I need to obtain and then change. The problem is that 'xyz2' contains 'xyz' and therefore also matches with a regular expression.
My code so far is below ('data' is a Python dictionary and 'specie_name_and_initial_values' is a list of lists, where each sublist contains two elements: a species name and the numerical value that goes with it):
all_keys = list(data.keys())
for i in range(len(all_keys)):
    if all_keys[i] != 'Time':
        # print all_keys[i]
        pattern = re.compile(all_keys[i])
        for j in range(len(specie_name_and_initial_values)):
            print re.findall(pattern, specie_name_and_initial_values[j][0])
Variations of the regular expression I have tried include:
pattern = re.compile('^'+all_keys[i]+'$')
pattern = re.compile('^'+all_keys[i])
pattern = re.compile(all_keys[i]+'$')
And I've also tried using 'in' as a qualifier (i.e. within a for loop)
Any help would be greatly appreciated. Thanks
Ciaran
----------EDIT------------
To clarify: my current code is below. It's used within a class/method structure.
def calculate_relative_data_based_on_initial_values(self, copasi_file, xlsx_data_file, data_type='fold_change', time='seconds'):
    copasi_tool = MineParamEstTools()
    data = pandas.io.excel.read_excel(xlsx_data_file, header=0)
    # uses custom class and method to get the list of lists from a file
    specie_name_and_initial_values = copasi_tool.get_copasi_initial_values(copasi_file)
    if time == 'minutes':
        data['Time'] = data['Time'] * 60
    elif time == 'hour':
        data['Time'] = data['Time'] * 3600
    elif time == 'seconds':
        print 'Time is already in seconds.'
    else:
        print 'Not a valid time unit'
    all_keys = list(data.keys())
    species = []
    for i in range(len(specie_name_and_initial_values)):
        species.append(specie_name_and_initial_values[i][0])
    for i in range(len(all_keys)):
        for j in range(len(specie_name_and_initial_values)):
            if all_keys[i] in species[j]:
                print all_keys[i]
The table returned from pandas is accessed like a dictionary. I need to go to my data table, extract the headers (the all_keys bit), look each header name up in the specie_name_and_initial_values variable, and obtain the corresponding value (the second element within specie_name_and_initial_values). After this, I multiply all values of my data table by the value obtained for each matched element.
I'm most likely overcomplicating this. Do you have a better solution?
thanks
----------edit 2 ---------------
Okay, below are my variables
all_keys = set([u'Cyp26_G_R1', u'Cyp26_G_rep1', u'Time'])
species = set(['[Cyp26_R1R2_RARa]', '[Cyp26_SRC3_1]', '[18-OH-RA]', '[p38_a]', '[Cyp26_G_rep1]', '[Cyp26]', '[Cyp26_G_a]', '[SRC3_p]', '[mRARa]', '[np38_a]', '[mRARa_a]', '[RARa_pp_TFIIH]', '[RARa]', '[Cyp26_G_L2]', '[atRA]', '[atRA_c]', '[SRC3]', '[RARa_Ser369p]', '[p38]', '[Cyp26_mRNA]', '[Cyp26_G_L]', '[TFIIH]', '[Cyp26_SRC3_2]', '[Cyp26_G_R1R2]', '[MSK1]', '[MSK1_a]', '[Cyp26_G]', '[Basal_Kinases]', '[Cyp26_R1_RARa]', '[4-OH-RA]', '[Cyp26_G_rep2]', '[Cyp26_Chromatin]', '[Cyp26_G_R1]', '[RXR]', '[SMRT]'])
You don't need a regex to find common elements; set.intersection will find all elements in list2 that are also in list1:
list1=['xyz','xyz2','other_randoms']
list2=['xyz']
print(set(list2).intersection(list1))
set(['xyz'])
Also if you wanted to compare 'xyz' to 'xyz2' you would use == not in and then it would correctly return False.
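For example:
print('xyz' == 'xyz2')  # False
print('xyz' in 'xyz2')  # True -- substring containment is the source of the false match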
You can also rewrite your own code a lot more succinctly:
for key in data:
    if key != 'Time':
        pattern = re.compile(key)
        for name, _ in specie_name_and_initial_values:
            print re.findall(pattern, name)
Based on your edit, you have somehow managed to turn the lists into strings; one option is to strip the []:
all_keys = set([u'Cyp26_G_R1', u'Cyp26_G_rep1', u'Time'])
specie_name_and_initial_values = set(['[Cyp26_R1R2_RARa]', '[Cyp26_SRC3_1]', '[18-OH-RA]', '[p38_a]', '[Cyp26_G_rep1]', '[Cyp26]', '[Cyp26_G_a]', '[SRC3_p]', '[mRARa]', '[np38_a]', '[mRARa_a]', '[RARa_pp_TFIIH]', '[RARa]', '[Cyp26_G_L2]', '[atRA]', '[atRA_c]', '[SRC3]', '[RARa_Ser369p]', '[p38]', '[Cyp26_mRNA]', '[Cyp26_G_L]', '[TFIIH]', '[Cyp26_SRC3_2]', '[Cyp26_G_R1R2]', '[MSK1]', '[MSK1_a]', '[Cyp26_G]', '[Basal_Kinases]', '[Cyp26_R1_RARa]', '[4-OH-RA]', '[Cyp26_G_rep2]', '[Cyp26_Chromatin]', '[Cyp26_G_R1]', '[RXR]', '[SMRT]'])
specie_name_and_initial_values = set(s.strip("[]") for s in specie_name_and_initial_values)
print(all_keys.intersection(specie_name_and_initial_values))
Which outputs:
set([u'Cyp26_G_R1', u'Cyp26_G_rep1'])
FYI, if you had lists inside the set you would have gotten an error as lists are mutable so are not hashable.
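Putting it together for your original goal, here is a minimal sketch, assuming specie_name_and_initial_values is the list of [name, value] pairs from your class and data is the pandas table:
# map each species name (brackets stripped) to its initial value
initial_values = {name.strip('[]'): value
                  for name, value in specie_name_and_initial_values}
# scale every matching data column by its initial value
for key in data.keys():
    if key != 'Time' and key in initial_values:
        data[key] = data[key] * initial_values[key]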
I'm quite new to programming, so I'm sure there's a terser way to pose this, but I'm trying to create a personal bookmarking program. Given multiple URLs, each with a list of tags ordered by relevance, I want to be able to run a search consisting of a list of tags that returns a list of the most relevant URLs. My first solution, below, is to give the first tag a value of 1, the second 2, and so on, and let Python's list sort function do the rest. Two questions:
1) Is there a much more elegant/efficient way of doing this (embarrass me!)
2) Any other general approaches to the sorting by relevance given the inputs above problem?
Much obliged.
# Given a list of saved urls each with a corresponding user-generated taglist
# (ordered by relevance), the user enters a "search" list-of-tags, and is
# returned a sorted list of urls.
# Generate sample "content" linked-list-dictionary. The rationale is to
# be able to add things like 'title' etc at later stages and to
# treat each url/note as an independent entity. But a single dictionary
# approach like "note['url1']=['b','a','c','d']" might work better?
content = []
note = {'url':'url1', 'taglist':['b','a','c','d']}
content.append(note)
note = {'url':'url2', 'taglist':['c','a','b','d']}
content.append(note)
note = {'url':'url3', 'taglist':['a','b','c','d']}
content.append(note)
note = {'url':'url4', 'taglist':['a','b','d','c']}
content.append(note)
note = {'url':'url5', 'taglist':['d','a','c','b']}
content.append(note)
# An example search term of tags, ordered by importance
# I'm using a dictionary with an ordinal number system
# This seems clumsy
search = {'d':1,'a':2,'b':3}
# Create a tagCloud with one entry for each tag that occurs
tagCloud = []
for note in content:
    for tag in note['taglist']:
        if tagCloud.count(tag) == 0:
            tagCloud.append(tag)
# Create a dictionary that associates an integer value denoting
# relevance (1 is most relevant etc) for each existing tag
d = {}
for tag in tagCloud:
    try:
        d[tag] = search[tag]
    except KeyError:
        d[tag] = 100
# Create a [[relevance, tag],[],[],...] result list & sort
result = []
for note in content:
    resultNote = []
    for tag in note['taglist']:
        resultNote.append([d[tag], tag])
    resultNote.append(note['url'])
    result.append(resultNote)
result.sort()
# Remove the relevance values & recreate a list containing
# the url string followed by corresponding tags.
# It's so hacky I've forgotten how it works!
# It's mostly for display, but suggestions on "best-practice"
# intermediate-form data storage?
finalResult = []
for note in result:
    temp = []
    temp.append(note.pop())
    for tag in note:
        temp.append(tag[1])
    finalResult.append(temp)
print "Content: ", content
print "Search: ", search
print "Final Result: ", finalResult
1) Is there a much more elegant/efficient way of doing this (embarrass me!)
Sure thing. The basic idea: quit trying to tell Python what to do, and just ask it for what you want.
content = [
{'url':'url1', 'taglist':['b','a','c','d']},
{'url':'url2', 'taglist':['c','a','b','d']},
{'url':'url3', 'taglist':['a','b','c','d']},
{'url':'url4', 'taglist':['a','b','d','c']},
{'url':'url5', 'taglist':['d','a','c','b']}
]
search = {'d' : 1, 'a' : 2, 'b' : 3}
# We can create the tag cloud like this:
# tagCloud = set(sum((note['taglist'] for note in content), []))
# But we don't actually need it: instead, we'll just use a default value
# when looking things up in the 'search' dict.
# Create a [[relevance, tag],[],[],...] result list & sort
result = sorted(
[
[search.get(tag, 100), tag]
for tag in note['taglist']
] + [[note['url']]]
# The result will look like [ [relevance, tag],... , [url] ]
# Note that the url is wrapped in a list too. This makes the
# last processing step easier: we just take the last element of
# each nested list.
for note in content
)
# Remove the relevance values & recreate a list containing
# the url string followed by corresponding tags.
finalResult = [
[x[-1] for x in note]
for note in result
]
print "Content: ", content
print "Search: ", search
print "Final Result: ", finalResult
I suggest you also give a weight to each tag, depending on how rare it is (e.g. a “tarantula” tag would weigh more than a “nature” tag¹). For a given URL, rare tags that are common with other URLs should mark a stronger relevance, while frequently used tags of the given URL not existing in another URL should mark down the relevance.
It's easy to convert the rules I describe above into calculations of a numerical relevance for every URL.
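For instance, a minimal sketch of that idea, using an inverse-frequency weight (the log scaling is just one common choice, not the only one):
import math
from collections import Counter

# count how many notes carry each tag
tag_counts = Counter(tag for note in content for tag in note['taglist'])
total = len(content)

def tag_weight(tag):
    # rarer tags get a larger weight; a 'tarantula' tag outweighs 'nature'
    return math.log(float(total) / tag_counts[tag]) + 1.0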
¹ unless all your URLs are related to “tarantulas”, of course :)