avg_sentence_length is a function that calculates the average length of a sentence in words:
def avg_sentence_length(text):
    """ (list of str) -> float

    Precondition: text contains at least one sentence.

    A sentence is defined as a non-empty string of non-terminating
    punctuation surrounded by terminating punctuation or beginning or
    end of file. Terminating punctuation is defined as !?.

    Return the average number of words per sentence in text.

    >>> text = ['The time has come, the Walrus said\n',
    'To talk of many things: of shoes - and ships - and sealing wax,\n',
    'Of cabbages; and kings.\n',
    'And why the sea is boiling hot;\n',
    'and whether pigs have wings.\n']
    >>> avg_sentence_length(text)
    17.5
    """
I think you are looking for something like this?
def averageSentence(sentence):
    words = sentence.split()
    # average number of characters per word in the sentence
    average = sum(len(word) for word in words) / len(words)
    print(average)

def main():
    sentence = input("Enter Sentence: ")
    averageSentence(sentence)

main()
output:
Enter Sentence: my name is something
4.25
I am using IDLE with Python 3 or above. If you are working with Python 2.7 or so, the code will be a little bit different.
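For reference, a rough sketch of what the Python 2.7 version might look like (raw_input instead of input, plus a __future__ import so the division does not truncate):
from __future__ import division  # keep the average as a float under Python 2

def averageSentence(sentence):
    words = sentence.split()
    average = sum(len(word) for word in words) / len(words)
    print(average)

def main():
    sentence = raw_input("Enter Sentence: ")
    averageSentence(sentence)

main()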
We can use reduce and lambda.
from functools import reduce

def Average(l):
    avg = reduce(lambda x, y: x + y, l) / len(l)
    return avg

def AVG_SENT_LNTH(File):
    SENTS = [i.split() for i in open(File).read().splitlines()]
    Lengths = [len(i) for i in SENTS]
    return Average(Lengths)

print("Train\t", AVG_SENT_LNTH("Train.dat"))
Though how you split the text into sentences depends entirely on your data.
import doctest
import re

def avg_sentence_length(text):
    r"""(list of str) -> float

    Precondition: text contains at least one sentence.

    A sentence is defined as a non-empty string of non-terminating
    punctuation surrounded by terminating punctuation or beginning or
    end of file. Terminating punctuation is defined as !?.

    Return the average number of words per sentence in text.

    >>> text = ['The time has come, the Walrus said\n',
    ... 'To talk of many things: of shoes - and ships - and sealing wax,\n',
    ... 'Of cabbages; and kings.\n',
    ... 'And why the sea is boiling hot;\n',
    ... 'and whether pigs have wings.\n']
    >>> avg_sentence_length(text)
    17.5
    """
    terminating_punct = "[!?.]"
    punct = r"\W"  # non-word characters

    sentences = [
        s.strip()  # without trailing whitespace
        for s in re.split(
            terminating_punct,
            "".join(text).replace("\n", " "),  # text as 1 string
        )
        if s.strip()  # non-empty
    ]

    def wordcount(s):
        """Split sentence s on punctuation
        and return number of non-empty words
        """
        return len([w for w in re.split(punct, s) if w])

    return sum(map(wordcount, sentences)) / len(sentences)

# test the spec. I just made the docstring raw with 'r'
# and added ... where needed
doctest.run_docstring_examples(avg_sentence_length, globals())
Use word_tokenize to find the words, then use a list comprehension to keep only the alphabetic and numeric tokens.
from nltk import word_tokenize
from functools import reduce

text = ['The time has come, the Walrus said\n',
        'To talk of many things: of shoes - and ships - and sealing wax,\n',
        'Of cabbages; and kings.\n',
        'And why the sea is boiling hot;\n',
        'and whether pigs have wings.\n']

sentences = [[word for word in word_tokenize(sent)
              if word.isalpha() or word.isdigit()]
             for sent in text]

counts = []
for sentence in sentences:
    counts.append(len(sentence))

# https://www.geeksforgeeks.org/reduce-in-python/
# reduce takes the first two items of the list, adds them, then keeps
# accumulating with the next item until the end of the list.
def Average(list_counts):
    avg = reduce(lambda x, y: x + y, list_counts) / len(list_counts)
    return avg

print("Average words in the sentence is", Average(counts))
output:
[['The', 'time', 'has', 'come', 'the', 'Walrus', 'said'], ['To', 'talk', 'of', 'many', 'things', 'of', 'shoes', 'and', 'ships', 'and', 'sealing', 'wax'], ['Of', 'cabbages', 'and', 'kings'], ['And', 'why', 'the', 'sea', 'is', 'boiling', 'hot'], ['and', 'whether', 'pigs', 'have', 'wings']]
[7, 12, 4, 7, 5]
Average words in the sentence is 7.0
I have edited the code a bit in this answer, but it should work (I uninstalled Python so I can't test it, sorry; that was to make space on this rubbish laptop that only started with 28 GB!). This is the code:
def findAverageSentenceLength(long1, medium2, short3):
    # count words by splitting each sentence on whitespace
    S1LENGTH = len(long1.split())
    S2LENGTH = len(medium2.split())
    S3LENGTH = len(short3.split())
    ADDED_LENGTHS = S1LENGTH + S2LENGTH + S3LENGTH
    AVERAGE = ADDED_LENGTHS / 3
    print("The average sentence length is", AVERAGE, "!")

long1input = input("Enter a 17-30 word sentence.")
medium2input = input("Enter a 10-16 word sentence.")
short3input = input("Enter a 5-9 word sentence.")

findAverageSentenceLength(long1input, medium2input, short3input)
Hope this helps.
PS: This will only work in Python 3.
Related
I have a list of lists, and I want to see how frequently each one occurs in a sentence:
words = [plates, will]
sentence = [the, plates, will, still, shift, and, the, clouds, will, still, spew]
I want to count how many times a set of words has been mentioned in a list.
So from the list words, [plates, will] is mentioned just 1 time in the sentence.
I have a whole column which I want to iterate over.
Desirable output is:

sentence                                                               | word           | frequency
[the, plates, will, still, shift, and, the, clouds, will, still, spew] | [plates, will] | 1
[the, plates, will, still, shift, and, the, clouds, will, still, spew] | [still, spew]  | 1
I have tried this:
for word in words:
    if word in sentence:
        counts[word] += 1
    else:
        counts[word] = 1
also
[[word.count() for word in b if word in row] for row in b]
Any help to get the right output?
This is not a one-liner, but it does the job as I understood it.
words = ["plates", "will"]
sentence = ["the", "plates", "will", "still", "shift", "and"]
count = 0
# Go through each word in the sentence
for si in range(len(sentence) - len(words)):
match = True
# Compare if the following words match
for wi, word in enumerate(words):
# Break if one word is wrong
if sentence[si + wi] != word:
match = False
break
if match:
count +=1
print(count)
I think that a solution with Counter is simpler.
from collections import Counter

words = ['plates', 'will']
sentence = ['the', 'plates', 'will', 'still', 'shift', 'and', 'the', 'clouds', 'will', 'still', 'spew']

# Counter tallies how often each individual word appears in the sentence
word_counts = Counter(sentence)
for word in words:
    print(word, word_counts[word])
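If you need the pair counted as a consecutive sequence (as in the desired output above) rather than each word on its own, a small sketch using Counter over adjacent word pairs could look like this:
from collections import Counter

words = ['plates', 'will']
sentence = ['the', 'plates', 'will', 'still', 'shift', 'and', 'the', 'clouds', 'will', 'still', 'spew']

# count every adjacent pair of words, then look up the pair of interest
pair_counts = Counter(zip(sentence, sentence[1:]))
print(pair_counts[tuple(words)])  # prints 1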
I tried using this code that I found online:
K=sentences
m=[len(i.split()) for i in K]
lengthorder= sorted(K, key=len, reverse=True)
#print(lengthorder)
#print("\n")
list1 = lengthorder
str1 = '\n'.join(list1)
print(str1)
print('\n')
Sentence1 = "We have developed speed, but we have shut ourselves in"
res = len(Sentence1.split())
print ("The longest sentence in this text contains" + ' ' + str(res) + ' ' + "words.")
Sentence2 = "More than cleverness we need kindness and gentleness"
res = len(Sentence2.split())
print ("The second longest sentence in this text contains" + ' ' + str(res) + ' ' + "words.")
Sentence3 = "Machinery that gives abundance has left us in want"
res = len(Sentence3.split())
print ("The third longest sentence in this text contains" + ' ' + str(res) + ' ' + "words.")
but it doesn't sort the sentences by the number of words; it sorts by actual character length instead.
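One minimal fix, assuming K is the list of sentences from the snippet above, is to sort by the number of whitespace-separated words rather than by character count:
lengthorder = sorted(K, key=lambda s: len(s.split()), reverse=True)  # word count, not character count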
You can simply iterate through the different sentences and split them up into words like this:
text = " We have developed speed. but we have. shut ourselves in Machinery that. gives abundance has left us in want Our knowledge has made us cynical Our cleverness, hard and unkind We think too much and feel too little More than machinery we need humanity More than cleverness we need kindness and gentleness"
# split into sentances
text2array = text.split(".")
i =0
# interate through sentances and split them into words
for sentance in text2array:
text2array[i] = sentance.split(" ")
i += 1
# sort the sentances by word length
text2array.sort(key=len,reverse=True)
i = 0
#iterate through sentances and print them to screen
for sentance in text2array:
i += 1
sentanceOut = ""
for word in sentance:
sentanceOut += " " + word
sentanceOut += "."
print("the nr "+ str(i) +" longest sentence is" + sentanceOut)
You can define a function that uses a regex to obtain the number of words in a given sentence:
import re

def get_word_count(sentence: str) -> int:
    return len(re.findall(r"\w+", sentence))
Assuming you already have a list of sentences, you can iterate the list and pass each sentence to the word count function then store each sentence and its word count in a dictionary:
sentences = [
    "Assume that this sentence has one word. Really?",
    "Assume that this sentence has more words than all sentences in this list. Obviously!",
    "Assume that this sentence has more than one word. Duh!",
]

word_count_dict = {}
for sentence in sentences:
    word_count_dict[sentence] = get_word_count(sentence)
At this point, the word_count_dict contains sentences as keys and their associated word count as values.
You can then sort word_count_dict by values:
sorted_word_count_dict = dict(
    sorted(word_count_dict.items(), key=lambda item: item[1], reverse=True)
)
Here's the full snippet:
import re

def get_word_count(sentence: str) -> int:
    return len(re.findall(r"\w+", sentence))

sentences = [
    "Assume that this sentence has one word. Really?",
    "Assume that this sentence has more words than all sentences in this list. Obviously!",
    "Assume that this sentence has more than one word. Duh!",
]

word_count_dict = {}
for sentence in sentences:
    word_count_dict[sentence] = get_word_count(sentence)

sorted_word_count_dict = dict(
    sorted(word_count_dict.items(), key=lambda item: item[1], reverse=True)
)

print(sorted_word_count_dict)
Let's assume that your sentences are already separate and there is no need to detect the sentences.
So we have a list of sentences. Then we need to calculate the length of each sentence based on its word count. The basic way is to split on spaces, since each space separates two words in a sentence.
list_of_sen = ['We have developed speed, but we have shut ourselves in',
               'Machinery that gives abundance has left us in want Our knowledge has made us cynical Our cleverness',
               'hard and unkind We think too much and feel too little More than machinery we need humanity More than cleverness we need kindness and gentleness']

sen_len = [len(i.split()) for i in list_of_sen]
sen_len = sorted(sen_len, reverse=True)

for index, count in enumerate(sen_len):
    print(f'The {index+1} longest sentence in this text contains {count} words')
But if your sentences are not already separated, we first need to detect the end of each sentence and then split. Your sample data does not contain any punctuation that could be used to separate sentences, so assuming your data does have punctuation, the approach below can be helpful.
see this question
from nltk import tokenize

p = "Good morning Dr. Adams. The patient is waiting for you in room number 3."
tokenize.sent_tokenize(p)
I am writing a program in Python. The user enters a text message, and I need to check whether a sequence of words occurs in that message. Example: for the message "Hello world, my friend." the check for the sequence of the two words "Hello", "world" gives the result True, but for the message "Hello, beautiful world" the result is False. When only two words need to be checked it is possible the way I did it in the code below, but with combinations of 5 or more words it becomes difficult. Is there a small solution to this problem?
s = message.text
s = s.lower()
lst = s.split()

elif "hello" in lst and "world" in lst:
    if "hello" in lst:
        c = lst.index("hello")
        if lst[c+1] == "world" or lst[c-1] == "world":
            E = True
        else:
            E = False
The straightforward way is to use a loop. Split your message into individual words, and then check for each of those in the sentence in general.
word_list = message.split()  # this gives you a list of words to find

word_found = True
for word in word_list:
    if word not in message2:
        word_found = False

print(word_found)
The flag word_found is True iff all words were found in the sentence. There are many ways to make this shorter and faster, especially using the all operator, and providing the word list as an in-line expression.
word_found = all(word in message2 for word in message.split())
Now, if you need to restrict your "found" property to matching exact words, you'll need more preprocessing. The above code is too forgiving of substrings, such as finding "Are you OK ?" in the sentence "your joke is only barely funny". For the more restrictive case, you should break message2 into words, strip those words of punctuation, drop them to lower-case (to make matching easier), and then look for each word (from message) in the list of words from message2.
Can you take it from there?
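As a rough sketch of that stricter matching (the helper name words_of and the use of str.strip with string.punctuation are my own assumptions, not from the original answer):
import string

def words_of(text):
    # lower-case, split on whitespace, strip surrounding punctuation, drop empties
    words = [w.strip(string.punctuation).lower() for w in text.split()]
    return [w for w in words if w]

message = "Are you OK ?"
message2 = "your joke is only barely funny"
word_found = all(word in words_of(message2) for word in words_of(message))
print(word_found)  # False: no whole word of message appears in message2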
I will clarify your requirement first:
ignore case
consecutive sequence
match in any order, like permutation or anagram
support duplicated words
If the number of words is not too large, you can try this easy-to-understand but not fastest approach.
split all the words in the text message
join them with ' '
list all the permutations of the word list and join each of them with ' ' too. For example, to check the sequence ['Hello', 'beautiful', 'world'], the permutations will be 'Hello beautiful world', 'Hello world beautiful', 'beautiful Hello world'... and so on.
then just check whether any one permutation, such as 'hello beautiful world', is a substring of the joined text.
The sample code is here:
import itertools
import re

# permutations brute-force, O(nk!)
def checkWords(text, word_list):
    # split all words without space and punctuation
    text_words = re.findall(r"[\w']+", text.lower())
    # list all the permutations of word_list, and match
    for words in itertools.permutations(word_list):
        if ' '.join(words).lower() in ' '.join(text_words):
            return True
    return False
    # or use any, just one line
    # return any(' '.join(words).lower() in ' '.join(text_words) for words in list(itertools.permutations(word_list)))

def test():
    # True
    print(checkWords('Hello world, my friend.', ['Hello', 'world', 'my']))
    # False
    print(checkWords('Hello, beautiful world', ['Hello', 'world']))
    # True
    print(checkWords('Hello, beautiful world Hello World', ['Hello', 'world', 'beautiful']))
    # True
    print(checkWords('Hello, beautiful world Hello World', ['Hello', 'world', 'world']))
But this costs a lot when the number of words is large: k words generate k! permutations, so the time complexity is O(nk!).
I think a more efficient solution is a sliding window. The time complexity decreases to O(n):
import re
import collections

# sliding window, O(n)
def checkWords(text, word_list):
    # split all words without space and punctuation
    text_words = re.findall(r"[\w']+", text.lower())
    counter = collections.Counter(map(str.lower, word_list))

    start, end, count, all_indexes = 0, 0, len(word_list), []
    while end < len(text_words):
        counter[text_words[end]] -= 1
        if counter[text_words[end]] >= 0:
            count -= 1
        end += 1

        # if you want all the indexes of the matches, you can change here
        if count == 0:
            # all_indexes.append(start)
            return True

        if end - start == len(word_list):
            counter[text_words[start]] += 1
            if counter[text_words[start]] > 0:
                count += 1
            start += 1

    # return all_indexes
    return False
I don't know if this is exactly what you need, but this worked; you can test it:
message = 'hello world'
message2 = ' hello beautiful world'

if 'hello' in message and 'world' in message:
    print('yes')
else:
    print('no')

if 'hello' in message2 and 'world' in message2:
    print('yes')
output:
yes
yes
I have a string, for example 'i cant sleep what should i do', as well as a phrase that is contained in the string, 'cant sleep'. What I am trying to accomplish is to get an n-sized window around the phrase even if there aren't n words on either side. So in this case, with a window size of 2 (2 words on either side of the phrase), I would want 'i cant sleep what should'.
This is my current solution attempting to find a window of size 2; however, it fails when the number of words to the left or right of the phrase is less than 2. I would also like to be able to use different window sizes.
import re

sentence = 'i cant sleep what should i do'
phrase = 'cant sleep'

sentence_words = re.findall(r'\w+', sentence)
phrase_words = re.findall(r'\w+', phrase)

left = sentence_words.index(phrase_words[0])
right = sentence_words.index(phrase_words[-1])
print sentence_words[left-2:right+3]
You can use the partition method for a non-regex solution:
>>> s='i cant sleep what should i do'
>>> p='cant sleep'
>>> lh, _, rh = s.partition(p)
Then use a slice to get up to two words:
>>> n=2
>>> ' '.join(lh.split()[:n]), p, ' '.join(rh.split()[:n])
('i', 'cant sleep', 'what should')
Your exact output:
>>> ' '.join(lh.split()[:n]+[p]+rh.split()[:n])
'i cant sleep what should'
You would want to check whether p is in s or if the partition succeeds of course.
As pointed out in the comments, the slice on lh should use a negative index to take the last n words (thanks Mathias Ettinger):
>>> s='w1 w2 w3 w4 w5 w6 w7 w8 w9'
>>> p='w4 w5'
>>> n=2
>>> ' '.join(lh.split()[-n:]+[p]+rh.split()[:n])
'w2 w3 w4 w5 w6 w7'
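And the guard mentioned above, as a sketch continuing the same session (found is empty when the phrase does not occur):
>>> lh, found, rh = s.partition(p)
>>> if found:
...     print(' '.join(lh.split()[-n:] + [p] + rh.split()[:n]))
... else:
...     print('phrase not found')
...
w2 w3 w4 w5 w6 w7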
If you define words as entities separated by spaces, you can split your sentences and use regular Python slicing:
def get_window(sentence, phrase, window_size):
    sentence = sentence.split()
    phrase = phrase.split()
    words = len(phrase)

    for i, word in enumerate(sentence):
        if word == phrase[0] and sentence[i:i+words] == phrase:
            start = max(0, i - window_size)
            return ' '.join(sentence[start:i+words+window_size])

sentence = 'i cant sleep what should i do'
phrase = 'cant sleep'

print(get_window(sentence, phrase, 2))
You can also change it to a generator by changing return to yield, and so generate all windows if there are several matches of phrase in sentence:
>>> list(gen_window('I dont need it, I need to get rid of it', 'need', 2))
['I dont need it, I', 'it, I need to get']
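As a sketch, that generator variant might look like this (identical to get_window above except for the yield):
def gen_window(sentence, phrase, window_size):
    sentence = sentence.split()
    phrase = phrase.split()
    words = len(phrase)
    for i, word in enumerate(sentence):
        if word == phrase[0] and sentence[i:i+words] == phrase:
            start = max(0, i - window_size)
            yield ' '.join(sentence[start:i+words+window_size])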
import re

def contains_sublist(lst, sublst):
    n = len(sublst)
    for i in xrange(len(lst) - n + 1):
        if sublst == lst[i:i+n]:
            a = max(0, i - 2)  # clamp the left edge of the window at 0
            b = min(i + n + 2, len(lst))
            return ' '.join(lst[a:b])

sentence = 'i cant sleep what should i do'
phrase = 'cant sleep'

sentence_words = re.findall(r'\w+', sentence)
phrase_words = re.findall(r'\w+', phrase)

print contains_sublist(sentence_words, phrase_words)
You can split words using built-in string methods, so re shouldn't be necessary. If you want to define varying window sizes, wrap it in a function call like so:
def get_word_window(sentence, phrase, w_left=0, w_right=0):
    w_lst = sentence.split()
    p_lst = phrase.split()

    for i, word in enumerate(w_lst):
        if word == p_lst[0] and w_lst[i:i+len(p_lst)] == p_lst:
            left = max(0, i - w_left)
            right = min(len(w_lst), i + w_right + len(p_lst))
            return w_lst[left:right]
Then you can get the new phrase like so:
>>> sentence='i cant sleep what should i do'
>>> phrase='cant sleep'
>>> ' '.join(get_word_window(sentence,phrase,2,2))
'i cant sleep what should'
Is there a way to remove duplicated, consecutive words/phrases in a string? E.g.
[in]: foo foo bar bar foo bar
[out]: foo bar foo bar
I have tried this:
>>> s = 'this is a foo bar bar black sheep , have you any any wool woo , yes sir yes sir three bag woo wu wool'
>>> [i for i,j in zip(s.split(),s.split()[1:]) if i!=j]
['this', 'is', 'a', 'foo', 'bar', 'black', 'sheep', ',', 'have', 'you', 'any', 'wool', 'woo', ',', 'yes', 'sir', 'yes', 'sir', 'three', 'bag', 'woo', 'wu']
>>> " ".join([i for i,j in zip(s.split(),s.split()[1:]) if i!=j]+[s.split()[-1]])
'this is a foo bar black sheep , have you any wool woo , yes sir yes sir three bag woo wu'
What happens when it gets a little more complicated and i want to remove phrases (let's say phrases can be made up of up to 5 words)? how can it be done? E.g.
[in]: foo bar foo bar foo bar
[out]: foo bar
Another example:
[in]: this is a sentence sentence sentence this is a sentence where phrases phrases duplicate where phrases duplicate . sentence are not prhases .
[out]: this is a sentence where phrases duplicate . sentence are not prhases .
You can use the re module for that.
>>> s = 'foo foo bar bar'
>>> re.sub(r'\b(.+)\s+\1\b', r'\1', s)
'foo bar'
>>> s = 'foo bar foo bar foo bar'
>>> re.sub(r'\b(.+)\s+\1\b', r'\1', s)
'foo bar foo bar'
If you want to match any number of consecutive occurrences:
>>> s = 'foo bar foo bar foo bar'
>>> re.sub(r'\b(.+)(\s+\1\b)+', r'\1', s)
'foo bar'
Edit. An addition for your last example. To do so you'll have to call re.sub while there are duplicate phrases. So:
>>> s = 'this is a sentence sentence sentence this is a sentence where phrases phrases duplicate where phrases duplicate'
>>> while re.search(r'\b(.+)(\s+\1\b)+', s):
... s = re.sub(r'\b(.+)(\s+\1\b)+', r'\1', s)
...
>>> s
'this is a sentence where phrases duplicate'
I love itertools. It seems like every time I want to write something, itertools already has it. In this case, groupby takes a list and groups repeated, sequential items from that list into a tuple of (item_value, iterator_of_those_values). Use it here like:
>>> from itertools import groupby
>>> s = 'this is a foo bar bar black sheep , have you any any wool woo , yes sir yes sir three bag woo wu wool'
>>> ' '.join(item[0] for item in groupby(s.split()))
'this is a foo bar black sheep , have you any wool woo , yes sir yes sir three bag woo wu wool'
So let's extend that a little with a function that returns a list with its duplicated repeated values removed:
from itertools import chain, groupby

def dedupe(lst):
    return list(chain(*[item[0] for item in groupby(lst)]))
That's great for one-word phrases, but not helpful for longer phrases. What to do? Well, first, we'll want to check for longer phrases by striding over our original phrase:
def stride(lst, offset, length):
    if offset:
        yield lst[:offset]
    while True:
        yield lst[offset:offset + length]
        offset += length
        if offset >= len(lst):
            return
Now we're cooking! OK. So our strategy here is to first remove all the single-word duplicates. Next, we'll remove the two-word duplicates, starting from offset 0 then 1. After that, three-word duplicates starting at offsets 0, 1, and 2, and so on until we've hit five-word duplicates:
def cleanse(list_of_words, max_phrase_length):
    for length in range(1, max_phrase_length + 1):
        for offset in range(length):
            list_of_words = dedupe(stride(list_of_words, offset, length))
    return list_of_words
Putting it all together:
from itertools import chain, groupby

def stride(lst, offset, length):
    if offset:
        yield lst[:offset]
    while True:
        yield lst[offset:offset + length]
        offset += length
        if offset >= len(lst):
            return

def dedupe(lst):
    return list(chain(*[item[0] for item in groupby(lst)]))

def cleanse(list_of_words, max_phrase_length):
    for length in range(1, max_phrase_length + 1):
        for offset in range(length):
            list_of_words = dedupe(stride(list_of_words, offset, length))
    return list_of_words

a = 'this is a sentence sentence sentence this is a sentence where phrases phrases duplicate where phrases duplicate . sentence are not prhases .'
b = 'this is a sentence where phrases duplicate . sentence are not prhases .'

print ' '.join(cleanse(a.split(), 5)) == b
txt1 = 'this is a foo bar bar black sheep , have you any any wool woo , yes sir yes sir three bag woo wu wool'
txt2 = 'this is a sentence sentence sentence this is a sentence where phrases phrases duplicate where phrases duplicate'

def remove_duplicate_words(txt):
    result = []
    for word in txt.split():
        if word not in result:
            result.append(word)
    return ' '.join(result)
Output:
In [7]: remove_duplicate_words(txt1)
Out[7]: 'this is a foo bar black sheep , have you any wool woo yes sir three bag wu'
In [8]: remove_duplicate_words(txt2)
Out[8]: 'this is a sentence where phrases duplicate'
Personally, I do not think we need to use any other modules for this (although I admit some of them are great). I just managed it with simple looping, by first converting the string into a list. I tried it on all the examples listed above and it works fine.
sentence = str(raw_input("Please enter your sentence:\n"))
word_list = sentence.split()

def check_if_same(i, j):  # checks if two slices of the word list are the same
    global word_list
    next = (2*j) - i  # end point of the second slice to compare (essentially j + phrase_len)
    is_same = False
    if word_list[i:j] == word_list[j:next]:
        is_same = True
    # The line below is just for debugging. Prints the lists we are comparing and whether they are equal
    #print "Comparing: " + ' '.join(word_list[i:j]) + " " + ' '.join(word_list[j:next]) + " " + str(is_same)
    return is_same

phrase_len = 1
while phrase_len <= int(len(word_list) / 2):  # checks the sentence for different phrase lengths
    curr_word_index = 0
    while curr_word_index < len(word_list):  # checks all the words for the specified phrase length
        result = check_if_same(curr_word_index, curr_word_index + phrase_len)  # checks similarity
        if result == True:
            del word_list[curr_word_index : curr_word_index + phrase_len]  # deletes the repeated phrase
        else:
            curr_word_index += 1
    phrase_len += 1

print "Answer: " + ' '.join(word_list)
With a pattern similar to sharcashmo's pattern, you can use subn, which returns the number of replacements, inside a while loop:
import re

txt = r'this is a sentence sentence sentence this is a sentence where phrases phrases duplicate where phrases duplicate . sentence are not phrases .'

pattern = re.compile(r'(\b\w+(?: \w+)*)(?: \1)+\b')
repl = r'\1'

res = txt
while True:
    res, nbr = pattern.subn(repl, res)
    if nbr == 0:
        break

print res
When there are no more replacements, the while loop stops.
With this method you can get all overlapped matches (which is impossible with a single pass in a replacement context), without testing the same pattern twice.
This should fix any number of adjacent duplicates, and works with both of your examples. I convert the string to a list, fix it, then convert back to a string for output:
mywords = "foo foo bar bar foo bar"
words = mywords.split()  # avoid shadowing the built-in name 'list'

def remove_adjacent_dups(alist):
    result = []
    most_recent_elem = None
    for e in alist:
        if e != most_recent_elem:
            result.append(e)
            most_recent_elem = e
    to_string = ' '.join(result)
    return to_string

print remove_adjacent_dups(words)
Output:
foo bar foo bar