Printing a dynamic and large list in Python

I am trying to print a large list as a sequence of characters, like subtitles scrolling across a TV screen. I have attached the following photo to illustrate:
As you can see, the input text should scroll from right to left.
To arrive at this point I wrote the following code:
from __future__ import print_function
import os
from time import sleep
text = """A company which supplied lingerie to the Queen has lost its royal warrant over a book which revealed details of royal bra fittings.Rigby & Peller,
a luxury underwear firm founded in London, had held the royal warrant since 1960. It was withdrawn after June Kenton, who fitted bras for the Quee
n, released a book called 'Storm in a D-Cup'.Mrs Kenton said there was "nothing" in the book to "be upset about", adding that it was an "unbelieva
ble" decision.Buckingham Palace said it did not comment on individual companies"""
whole_Str = []
i = 0
while i < len(text):
    rest = text[i:i+50]
    i += 1
    whole_Str.append(rest)
for sequence in whole_Str:
    print(sequence, end='\r')
    sleep(0.1)
However, there are two problems:
First, Python editors do not allow me to overwrite different parts of this text in the same place.
Second, and most important, I suspect sleep() is not a reasonable way to create the delay, since for a large text the whole program should not stop just to print subtitles.
Any kind of help would be greatly appreciated.

Maybe your first problem is solved by this:
import sys
from time import sleep
text = """dummy text"""
whole_Str = text
for i in range(len(text) - 50):
    sys.stdout.write("\r" + text[i:i+50])
    sys.stdout.flush()
    sleep(0.1)
Not sure, though, what you mean by the second one.
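If the concern is that sleep() blocks the rest of the program, one common workaround is to run the scrolling loop in a background thread, so the delays only pause that thread. A minimal sketch (Python 3; the scroll function and its width and delay parameters are my own names, not from the question):
import sys
import threading
from time import sleep

def scroll(text, width=50, delay=0.1):
    # Slide a fixed-width window over the text, redrawing in place with '\r'
    for i in range(len(text) - width):
        sys.stdout.write("\r" + text[i:i + width])
        sys.stdout.flush()
        sleep(delay)

text = """some long subtitle text that should scroll across the screen..."""

# daemon=True so the scroller never keeps the program alive on its own
t = threading.Thread(target=scroll, args=(text,), daemon=True)
t.start()
# ... the main program is free to do other work here ...
t.join()  # or skip join() if you don't need to wait for the scroller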

I was able to get it to work with the curses library. The code looks like this:
import time
import curses
text = """A company which supplied lingerie to the Queen has lost its royal warrant over a book which revealed details of royal bra fittings.Rigby & Peller,
a luxury underwear firm founded in London, had held the royal warrant since 1960. It was withdrawn after June Kenton, who fitted bras for the Quee
n, released a book called 'Storm in a D-Cup'.Mrs Kenton said there was "nothing" in the book to "be upset about", adding that it was an "unbelieva
ble" decision.Buckingham Palace said it did not comment on individual companies"""
def pbar(window):
    for i in text:
        # add a character to the string
        window.addstr(i)
        # refresh the window so the new character appears
        window.refresh()
        # sleep to get the scroll effect
        time.sleep(0.5)

# start the printing
curses.wrapper(pbar)
This gives the effect of horizontally scrolling text that appears from left to right.
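If you want the right-to-left sliding-window effect from the question rather than character-by-character printing, a variation that redraws a fixed-width slice at the same screen position might look like this (a sketch; the marquee function, its width and delay parameters, and the shortened text are mine):
import time
import curses

text = "A company which supplied lingerie to the Queen has lost its royal warrant ..."  # same text as above
text = text.replace("\n", " ")  # addstr would treat embedded newlines as line breaks

def marquee(window, width=50, delay=0.1):
    # Redrawing the current slice at row 0, column 0 on every step
    # makes the text appear to scroll from right to left
    for i in range(len(text) - width):
        window.addstr(0, 0, text[i:i + width])
        window.refresh()
        time.sleep(delay)

curses.wrapper(marquee)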

Related

Analysing English text with a French name

I'm dealing with the well-known novel by Victor Hugo, "Les Misérables".
Part of my project is to detect the occurrences of each of the novel's characters in a sentence and count them. This can be done easily by something like this:
from collections import OrderedDict

def character_frequency(character_per_sentences_dict):
    characters_frequency = OrderedDict([])
    for k, v in character_per_sentences_dict.items():
        if len(v) != 0:
            characters_frequency[k] = len(v)
    return characters_frequency
This piece of code works well for all of the characters except "Èponine".
I also read the text with the following piece of code:
from nltk.tokenize import sent_tokenize

with open(path_to_volume + '.txt', 'r', encoding='latin1') as fp:
    novel = ' '.join(fp.readlines())

# Tokenize sentences and calculate the number of sentences
sentences = sent_tokenize(novel)
num_volume = path_to_volume.split("-v")[-1]
I should add that the spelling of "Èponine" is the same everywhere.
Any idea what's going on?!
Here is a sample in which this name appears:
" ONE SHOULD ALWAYS BEGIN BY ARRESTING THE VICTIMS
At nightfall, Javert had posted his men and had gone into ambush himself between the trees of the Rue de la Barriere-des-Gobelins which faced the Gorbeau house, on the other side of the boulevard. He had begun operations by opening his pockets, and dropping into it the two young girls who were charged with keeping a watch on the approaches to the den. But he had only caged Azelma. As for Èponine, she was not at her post, she had disappeared, and he had not been able to seize her. Then Javert had made a point and had bent his ear to waiting for the signal agreed upon. The comings and goings of the fiacres had greatly agitated him. At last, he had grown impatient, and, sure that there was a nest there, sure of being in luck, having recognized many of the ruffians who had entered, he had finally decided to go upstairs without waiting for the pistol-shot."
I agree with @BoarGules that there is likely a more efficient and effective way to approach this problem. With that said, I'm not sure what your problem is here. Python fully supports Unicode: you can "just do it", using Unicode in your program logic with Python's standard string operations and libraries.
For example, this works:
#!/usr/bin/env python
import requests

names = [
    u'Éponine',
    u'Cosette',
]

# Retrieve Les Misérables from Project Gutenberg
t = requests.get("http://www.gutenberg.org/files/135/135-0.txt").text

for name in names:
    c = t.count(name)
    print("{}: {}".format(name, c))
Results:
Éponine: 81
Cosette: 1004
I obviously don't have the text you have, so I don't know whether the problem is how it is encoded or how it is being read; I can't test that without it. In this code, I fetch the source text from the internet. My point is simply that non-ASCII characters should pose no impediment as long as your inputs are reasonable.
Nearly all of the runtime is spent reading the text; even if you added dozens of names, the counting wouldn't add a noticeable delay on any decent computer. So this method works just fine.
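One subtle thing worth checking (an assumption on my part, since I can't inspect your file): visually identical accented characters can be encoded in two ways, as a single precomposed code point or as a base letter plus a combining accent, and str.count treats those as different strings. Normalizing both the text and the search name rules that out:
import unicodedata

# 'È' can be one code point (NFC) or 'E' + a combining grave accent (NFD);
# they render identically but compare unequal
precomposed = "\u00c8ponine"   # 'Èponine', NFC form
decomposed = "E\u0300ponine"   # 'E' + U+0300 combining grave, NFD form
print(precomposed == decomposed)  # False

# Normalizing both sides to the same form makes counting reliable
print(precomposed == unicodedata.normalize("NFC", decomposed))  # True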

I'm a newb in Python and data mining, and have tokenizer and data type issues

Hi~ I am having a problem while trying to tokenize Facebook comments that are in CSV format. I have my CSV data ready, and I have completed reading the file.
I am using Anaconda3 with Python 3.5. (My CSV data has about 20k rows and 1 column.)
The code is:
import csv
from nltk import sent_tokenize, word_tokenize

with open('facebook_comments_samsung.csv', 'r') as f:
    reader = csv.reader(f)
    your_list = list(reader)

print(your_list)
What comes out as a result is something like this:
[['comment_message'], ['b"Yet again been told a pack of lies by Samsung Customer services who have lost my daughters phone and couldn\'t care less. ANYONE WHO PURCHASES ANYTHING FROM THIS COMPANY NEEDS THEIR HEAD TESTED"'], ["b'You cannot really blame an entire brand worldwide for a problem caused by a branch. It is a problem yes, but address your local problem branch'"], ["b'Haha!! Sorry if they lost your daughters phone but I will always buy Samsung products no matter what.'"], ["b'Salim Gaji BEST REPLIE EVER \\xf0\\x9f\\x98\\x8e'"], ["b'<3 Bewafa zarge <3 \\r\\n\\n \\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x93\\r\\n\\xf0\\x9f\\x8e\\xad\\xf0\\x9f\\x91\\x89 AQIB-BOT.ML \\xf0\\x9f\\x91\\x88\\xf0\\x9f\\x8e\\xadMANUAL\\xe2\\x99\\xaaKing.Bot\\xe2\\x84\\xa2 \\r\\n\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x94\\xe2\\x80\\x93\\xe2\\x80\\x94'"], ["b'\\xf0\\x9f\\x8c\\x90 LATIF.ML \\xf0\\x9f\\x8c\\x90'"], ['b"I\'m just waiting here patiently for you guys to say that you\'ll be releasing the s8 and s8+ a week early, for those who pre-ordered. Wishful thinking \\xf0\\x9f\\x98\\x86. Can\'t wait!"'], ['b"That\'s some good positive thinking there sir."'], ["b'(y) #NextIsNow #DoWhatYouCant'"], ["b'looking good'"], ['b"I\'ve always thought that when I first set eyes on my first born that I\'d like it to be on the screen of a cameraphone at arms length rather than eye-to-eye while holding my child. Thank you Samsung for improving our species."'], ["b'cool story'"], ["b'I believe so!'"], ["b'superb'"], ["b'Nice'"], ["b'thanks for the share'"], ["b'awesome'"], ["b'How can I talk to Samsung'"], ["b'Wow'"], ["b'#DoWhatYouCant siempre grandes innovadores Samsung Mobile'"], ["b'I had a problem with my s7 edge when I first got it all fixed now. However when I went to the Samsung shop they were useless and rude they refused to help and said there is nothing they could do no wonder the shop was dead quiet'"], ["b'Zeeshan Khan Masti Khel'"], ["b'I dnt had any problem wd my phn'"], ["b'I have maybe just had a bad phone to start with until it got fixed eventually. I had to go to carphone warehouse they were very helpful'"], ["b'awesome'"], ["b'Ch Shuja Uddin'"], ["b'akhheeerrr'"], ["b'superb'"], ["b'nice story'"], ["b'thanks for the share'"], ["b'superb'"], ["b'thanks for the share'"], ['b"On February 18th 2017 I sent my phone away to with a screen issue. The lower part of the screen was flickering bright white. The phone had zero physical damage to the screen\\n\\nI receive an email from Samsung Quotations with a picture of my SIM tray. Upon phoning I was told my SIM tray was stuck inside the phone and was handed a \\xc2\\xa392.14 repair bill. There is no way that my SIM tray was stuck in the phone as I removed my SIM and memory card before sending the phone away.\\n\\nAfter numerous calls I finally gave in and agreed to pay the \\xc2\\xa392.14 on the understanding that my screen repair would also be covered in this cost. This was confirmed to me by the person on the phone.\\n\\nOn
Sorry for the inconvenience in reading the result. My bad.
To continue, I added:
tokens = [word_tokenize(i) for i in your_list]
for i in tokens:
    print(i)
print(tokens)
This is the part where I get the following error:
C:\Program Files\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _slices_from_text(self, text) in line 1278 TypeError: expected string or bytes-like object
What I want to do next is,
import nltk
en = nltk.Text(tokens)
print(len(en.tokens))
print(len(set(en.tokens)))
en.vocab()
en.plot(50)
en.count('galaxy s8')
And finally, I want to draw a wordcloud based on the data.
Being aware that every second of your time is precious, I am terribly sorry to ask for your help. I have been working on this for a couple of days and cannot find the right solution to my problem. Thank you for reading.
The error you're getting is because your CSV file is turned into a list of lists, one for each row in the file. The file only contains one column, so each of these lists has one element: the string containing the message you want to tokenize. To get past the error, unpack the sublists by using this line instead:
tokens = [word_tokenize(row[0]) for row in your_list]
After that, you'll need to learn some more Python and learn how to examine your program and your variables.
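To continue on to the nltk.Text steps from the question, the per-comment token lists need to be flattened into one flat list first, since nltk.Text expects a single sequence of tokens. A sketch of that next step (skipping the 'comment_message' header row is an assumption based on the sample output shown above):
import nltk

# Flatten the per-comment token lists into one token stream,
# skipping the header row ('comment_message')
flat_tokens = [tok for row_tokens in tokens[1:] for tok in row_tokens]

en = nltk.Text(flat_tokens)
print(len(en.tokens))       # total number of tokens
print(len(set(en.tokens)))  # number of distinct tokens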

Two input, one output Button

I am trying to make a button that takes in the state a person lives in and their race and outputs their life expectancy. I have a long list of data for all the states and various races. I also have my button set up, but need to find a way to use this data in an efficient way that allows the button to take in the two pieces of information and output one number, the life expectancy that correctly corresponds.
Thanks!
Here is what I have so far:
def life_expectancy_race(b):
    '''This will tell you your life expectancy based on the information plugged into the boxes.'''
    #number = print(text_input1.value) and print(text_input2.value)
    display.clear_output()
    display.display(display.Latex("Your life expectancy is {} years".format(life)))

text_input1 = widgets.Text(description='Enter the state you live in here')
text_input2 = widgets.Text(description='Enter your race here (White, Native American, Latino, Asian American, African American)')
button = widgets.Button(description='What is my life expectancy?')
button.on_click(life_expectancy_race)
display.display(text_input1)
display.display(text_input2)
display.display(button)
I'm really not sure what you're asking, but would something like this work? I'm not suggesting this specifically, I just whipped it up, but something in that general thought process. Sorry if I misunderstood your question.
white = 30
america = 40
print(white + america)
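A more direct way to wire the two inputs to one output is to key the data by (state, race) pairs in a dictionary and look the pair up inside the button callback. A sketch along those lines, reusing the widget setup from the question (the life_table numbers are placeholders I made up, not real data):
import ipywidgets as widgets
from IPython import display

# Placeholder data: (state, race) -> life expectancy in years (made-up numbers)
life_table = {
    ('California', 'White'): 79.8,
    ('California', 'Latino'): 83.2,
    ('Texas', 'African American'): 74.1,
    # ... one entry per (state, race) combination
}

text_input1 = widgets.Text(description='State')
text_input2 = widgets.Text(description='Race')
button = widgets.Button(description='What is my life expectancy?')

def life_expectancy_race(b):
    key = (text_input1.value.strip(), text_input2.value.strip())
    life = life_table.get(key)
    if life is None:
        print("No data for {} / {}".format(*key))
    else:
        print("Your life expectancy is {} years".format(life))

button.on_click(life_expectancy_race)
display.display(text_input1, text_input2, button)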

Parsing a txt file in Python where it is hard to split by delimiter

I am new to Python, and am wondering if anyone can help me with some file loading.
The situation is that I have some text files and I'm trying to do sentiment analysis. Here's the format of the text file: each line is split into three categories: <department>, <user>, <review>.
Here are some sample data:
men peter123 the pants are too tight for my liking!
kids georgel i really like this toy, it keeps my kid entertained for days! It is affordable and comes on time, i strongly recommend it
health kksd1 the health pills is drowsy by nature, please take care and do not drive after you eat the pills
office ty7d1 the printer came on time, the only problem with it is with the duplex function which i suspect its not really working
I want to turn it into this:
<department> <user> <review>
I have 50k lines of this data.
I tried to load it directly into numpy, but it raises an "empty separator" error. I looked on Stack Overflow, but I couldn't find a case that handles a varying number of delimiters. For instance, I can never know in advance how many spaces there are in my data set.
My biggest problem is: how do you count the delimiters and assign the pieces to columns? Is there a way to split each line into the three categories <department>, <user>, <review>? Bear in mind that the review text can contain arbitrary commas and spaces which I can't control, so the approach must be smart enough to cope with that.
Any ideas? Is there a way to tell Python that after it reads the user field, everything that follows falls under review?
With data like this I'd just use split() with the maxsplit argument:
If maxsplit is given, at most maxsplit splits are done (thus, the list will have at most maxsplit+1 elements).
Example:
from io import StringIO  # Python 3; on Python 2 use: from StringIO import StringIO

s = StringIO("""men peter123 the pants are too tight for my liking!
kids georgel i really like this toy, it keeps my kid entertained for days! It is affordable and comes on time, i strongly recommend it
health kksd1 the health pills is drowsy by nature, please take care and do not drive after you eat the pills
office ty7d1 the printer came on time, the only problem with it is with the duplex function which i suspect its not really working""")

for line in s:
    # split(None, 2) splits on any whitespace, at most twice,
    # so the whole review survives as the third piece
    category, user, review = line.split(None, 2)
    print("category: {} - user: {} - review: '{}'".format(category,
                                                          user,
                                                          review.strip()))
The output is:
category: men - user: peter123 - review: 'the pants are too tight for my liking!'
category: kids - user: georgel - review: 'i really like this toy, it keeps my kid entertained for days! It is affordable and comes on time, i strongly recommend it'
category: health - user: kksd1 - review: 'the health pills is drowsy by nature, please take care and do not drive after you eat the pills'
category: office - user: ty7d1 - review: 'the printer came on time, the only problem with it is with the duplex function which i suspect its not really working'
For reference:
https://docs.python.org/2/library/stdtypes.html#str.split
What about doing it sorta manually:
data = []
for line in input_data:
    tmp_split = line.split(" ")
    # Get the first part (dept)
    dept = tmp_split[0]
    # Get the 2nd part
    user = tmp_split[1]
    # Everything after is the review - put spaces back in between each piece
    review = " ".join(tmp_split[2:])
    data.append([dept, user, review])
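Since the question mentions 50k lines and loading into numpy, it may also help to convert the file once into a real three-column CSV using the same split(None, 2) idea; csv.writer takes care of quoting reviews that contain commas. A sketch (the file names reviews.txt and reviews.csv are made up):
import csv

with open("reviews.txt", "r") as src, open("reviews.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["department", "user", "review"])
    for line in src:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        # At most 2 splits: department, user, and the rest as the review
        department, user, review = line.split(None, 2)
        writer.writerow([department, user, review])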

Preserving line breaks when parsing with Scrapy in Python

I've written a Scrapy spider that extracts text from a page. The spider parses and outputs correctly on many of the pages, but is thrown off by a few. I'm trying to maintain line breaks and formatting in the document. Pages such as http://www.state.gov/r/pa/prs/dpb/2011/04/160298.htm are formatted properly like such:
April 7, 2011
Mark C. Toner
2:03 p.m. EDT
MR. TONER: Good afternoon, everyone. A couple of things at the top,
and then I’ll take your questions. We condemn the attack on innocent
civilians in southern Israel in the strongest possible terms, as well
as ongoing rocket fire from Gaza. As we have reiterated many times,
there’s no justification for the targeting of innocent civilians,
and those responsible for these terrorist acts should be held
accountable. We are particularly concerned about reports that indicate
the use of an advanced anti-tank weapon in an attack against civilians
and reiterate that all countries have obligations under relevant
United Nations Security Council resolutions to prevent illicit
trafficking in arms and ammunition. Also just a brief statement --
QUESTION: Can we stay on that just for one second?
MR. TONER: Yeah. Go ahead, Matt.
QUESTION: Apparently, the target of that was a school bus. Does that
add to your outrage?
MR. TONER: Well, any attack on innocent civilians is abhorrent, but
certainly the nature of the attack is particularly so.
While pages like http://www.state.gov/r/pa/prs/dpb/2009/04/121223.htm have output like this with no line breaks:
April 2, 2009
Robert Wood
11:53 a.m. EDTMR. WOOD: Good morning, everyone. I think it’s just
about still morning. Welcome to the briefing. I don’t have anything,
so – sir.QUESTION: The North Koreans have moved fueling tankers, or
whatever, close to the site. They may or may not be fueling this
missile. What words of wisdom do you have for the North Koreans at
this moment?MR. WOOD: Well, Matt, I’m not going to comment on, you
know, intelligence matters. But let me just say again, we call on the
North to desist from launching any type of missile. It would be
counterproductive. It’s provocative. It further inflames tensions in
the region. We want to see the North get back to the Six-Party
framework and focus on denuclearization.Yes.QUESTION: Japan has also
said they’re going to call for an emergency meeting in the Security
Council, you know, should this launch go ahead. Is this something that
you would also be looking for?MR. WOOD: Well, let’s see if this test
happens. We certainly hope it doesn’t. Again, calling on the North
not to do it. But certainly, we will – if that test does go forward,
we will be having discussions with our allies.
The code I'm using is as follows:
def parse_item(self, response):
    self.log('Hi, this is an item page! %s' % response.url)
    hxs = HtmlXPathSelector(response)
    speaker = hxs.select("//span[contains(@class, 'official_s_name')]") # gets the speaker
    speaker = speaker.select('string()').extract()[0] # extracts speaker text
    date = hxs.select('//*[@id="date_long"]') # gets the date
    date = date.select('string()').extract()[0] # extracts the date
    content = hxs.select('//*[@id="centerblock"]') # gets the content
    content = content.select('string()').extract()[0] # extracts the content
    texts = "%s\n\n%s\n\n%s" % (date, speaker, content) # puts everything together in a string
    filename = "/path/StateDailyBriefing-%s.txt" % date # creates a file name using the date
    # opens the file defined above and writes 'texts' using utf-8
    with codecs.open(filename, 'w', encoding='utf-8') as output:
        output.write(texts)
I think the problem lies in the formatting of the HTML of the page. On the pages that output the text incorrectly, the paragraphs are separated by <br> <p></p>, while on the pages that output correctly the paragraphs are contained within <p align="left" dir="ltr">. So, while I've identified this, I'm not sure how to make everything output consistently in the correct form.
The problem is that when you get text() or string(), <br> tags are not converted to newlines.
Workaround: replace the <br> tags before running the XPath queries. Code:
response = response.replace(body=response.body.replace('<br />', '\n'))
hxs = HtmlXPathSelector(response)
And let me give some advice: if you know there is only one node, you can use text() instead of string():
date = hxs.select('//*[@id="date_long"]/text()').extract()[0]
Try this XPath:
//*[@id="centerblock"]//text()
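To turn that //text() query into a single string while keeping the breaks, you can join the extracted pieces yourself. A sketch, assuming the old HtmlXPathSelector API from the question and the <br>-to-newline replacement shown above:
# Collect every text node under the content block and join them;
# the '\n' characters substituted for <br> tags survive the join
parts = hxs.select('//*[@id="centerblock"]//text()').extract()
content = ''.join(parts).strip()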
