Consider the following:
I'm making a Secret Santa script for my family for next year using Python, and although I have figured out how to draw names (see code below), I need to execute the draw just once. I'm planning to use the Flask framework to create a page that counts down the days until the draw (which I've figured out as well), but how do I make it so the draw happens just once, and not every time somebody loads that page? Get what I mean?
Anyway, I'll show you my code here:
# imports & initial Flask set up above
def do_matchup(names, draw, matches=None):
    if matches is None:
        matches = []
    while names:
        member = names.pop()
        recipient = choice(draw)
        if recipient != member:
            matches.append("%s=%s" % (member, recipient))
            draw.remove(recipient)
        else:
            names.append(member)
    return matches
@app.route("/")
def index():
    now = dt.datetime.now()
    draw_date = dt.datetime.strptime('10/1/2013', '%m/%d/%Y')
    days = draw_date - now
    family = ["member1", "member2", "member3", "member4", "member5", "Me"]
    hat_names = [name for name in family]
    matchup = do_matchup(family, hat_names)
    return render_template("base.html", now=now,
                           draw_date=draw_date,
                           days=days,
                           matchup=matchup)
The template is a basic HTML page that says {% if now < draw_date %} "There are x amount of days until the draw" {% else %} shows the draw results.
Every time the page is loaded, it does a new draw. How do I make it do the draw just once, so it doesn't give every family member different results?
You need a separate way of running the shuffle - e.g. by visiting a particular page - and that code should also save the shuffle to the database. Another way is to check whether there is already a shuffle for the current year, and only generate and save one if not.
Fundamentally, the answer is to save your shuffle somewhere, e.g. in a database.
If you are not using a database and just need to store this small amount of results, you could simply pickle the matchup to a file. Then in your index you check if the file exists: if so, read the file and return that matchup; otherwise, generate a new matchup, save it, and return the results.
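A minimal sketch of that file-based approach, reusing the do_matchup() helper from the question (the matchup.pickle file name is just an assumption; any path works):

import os
import pickle

MATCHUP_FILE = "matchup.pickle"  # hypothetical file name

def get_or_create_matchup():
    # If a saved draw already exists, reuse it so every visitor sees the same result
    if os.path.exists(MATCHUP_FILE):
        with open(MATCHUP_FILE, "rb") as f:
            return pickle.load(f)
    # Otherwise do the draw once and persist it for all later requests
    family = ["member1", "member2", "member3", "member4", "member5", "Me"]
    matchup = do_matchup(family, list(family))
    with open(MATCHUP_FILE, "wb") as f:
        pickle.dump(matchup, f)
    return matchup

index() would then call get_or_create_matchup() instead of doing the draw itself, ideally only when now >= draw_date, so the first request after the draw date triggers the one and only draw.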
Related
I'm writing a Streamlit app where I read data from a NoSQL database.
I need to fetch all the values in my collection (about 26,000 documents), which I do with a pymongo query when I run the app.
I've tried this with 2,000+ rows with no problems, but when I switch to the larger collection mentioned above, Streamlit re-runs in a loop and the web socket raises an error.
Is there a way to fetch a huge amount of data from the database and display it in a Streamlit app as a dataframe without raising an exception?
I have increased the message size by modifying the config setting for server.maxMessageSize, but it still doesn't work.
It might be helpful if you could include at least the error message itself to help narrow down the specific issue. From what I can tell, though, the overarching issue is that you're trying to send more data to Streamlit than its max message size permits. I'm assuming "I have increased the message size" means that you've modified the config setting for server.maxMessageSize per the Streamlit configuration guide, but if that is not the case, you should give that a try.
Since you have a huge amount of data, it might be best to try some form of pagination: instead of sending the entire dataframe to Streamlit, you send one section of it (a "page") at a time and provide some way of switching between pages, such as "previous page"/"next page" buttons. As an example, assuming you're loading your data in as a Pandas dataframe and have Streamlit imported as st, you could encapsulate pagination in a function like this:
def paginate_dataframe(df, page_size: int):
    # Determine the maximum number of pages we can have
    from math import ceil
    n_pages = ceil(len(df) / page_size)
    last_page = n_pages - 1

    # Get the current page, initializing the session state value if necessary
    current_page = st.session_state.setdefault("current_page", 0)

    # Define callback functions for the previous/next buttons so they can adjust the current page
    def decrement_page():
        if st.session_state.current_page > 0:
            st.session_state.current_page -= 1

    def increment_page():
        if st.session_state.current_page < last_page:
            st.session_state.current_page += 1

    # Create previous/next buttons, disabled on the first or last page respectively
    button_columns = st.columns(2)
    with button_columns[0]:
        st.button(
            label="previous page",
            on_click=decrement_page,
            disabled=(current_page <= 0),
        )
    with button_columns[1]:
        st.button(
            label="next page",
            on_click=increment_page,
            disabled=(current_page >= last_page),
        )

    # Show only the slice of the dataframe that belongs to the current page
    page_start = current_page * page_size
    st.dataframe(df.iloc[page_start: page_start + page_size])

paginate_dataframe(df, page_size=5)
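If even building the full dataframe on each rerun is too much, the same idea can be pushed down to the database so that each rerun fetches only one page from MongoDB. This is only a rough sketch under assumptions (the connection string, database, and collection names below are hypothetical placeholders):

import pandas as pd
import pymongo
import streamlit as st

client = pymongo.MongoClient("mongodb://localhost:27017")  # assumed connection string
collection = client["mydb"]["mycollection"]  # hypothetical database/collection names

def fetch_page(page: int, page_size: int) -> pd.DataFrame:
    # skip/limit makes MongoDB return only the documents for the requested page
    cursor = collection.find().skip(page * page_size).limit(page_size)
    return pd.DataFrame(list(cursor))

page = st.session_state.setdefault("current_page", 0)
st.dataframe(fetch_page(page, page_size=50))

Note that skip() can get slow for large offsets; paginating with a range filter on an indexed field is a common alternative.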
I'm trying to create a Python program that creates Spotify playlists according to the user's "mood"; that is, the user chooses a specific mood and the program returns a playlist where the songs' features (taken from the user's saved tracks) indicate that they match said mood. However, I'm running into the issue that the playlist is made up of the same song repeated 30 times, instead of 30 different songs. I'm fairly new to Python and programming in general, so maybe this isn't a very difficult problem, but I'm unable to see the issue.
Here is the route in my main code that refers to this issue (my full program has more routes, which all work fine so far). All other moods will be defined by other routes following the same logic. I'll provide any extra code that may be necessary to understand the issue better.
@app.route("/playlist_mood_angry")
def mood_playlist_angry():
    # define mood and create empty track list
    selected_mood = "Angry"
    tracks_uris = []  # we need all the track URIs to add to the future playlist
    if 'auth_header' in session:
        auth_header = session['auth_header']
        # get user profile and saved tracks
        user_profile_data = spotify.get_users_profile(auth_header)
        user_id = user_profile_data["id"]
        saved_tracks_data = spotify.get_user_saved_tracks(auth_header)
        playlist_name = 'CSMoodlet: Angry'
        playlist_description = "A playlist for when you're just pissed off and want a soundtrack to go with it. Automatically curated by CSMoodlet."
        # go through the saved tracks dictionary, get the tracks, and for each one check if the feature averages match the selected mood
        for item in saved_tracks_data["items"]:
            track = item["track"]
            features = sp.audio_features(track['id'])
            acousticness = features[0]['acousticness']
            danceability = features[0]['danceability']
            energy = features[0]['energy']
            speechiness = features[0]['speechiness']
            valence = features[0]['valence']
            track_mood = spotify.define_mood(acousticness, danceability, energy, speechiness, valence)
            # if the track's mood is "Angry" and the list is not 30 tracks long yet, append it to the list
            if track_mood == "Angry":  # if the track's mood is not Angry, it should go back to the for loop to check the next one (THIS DOESN'T WORK)
                while len(tracks_uris) < 30:
                    track_uri = "spotify:track:{}".format(track['id'])
                    tracks_uris.append(track_uri)
        # once it has gone through all saved tracks, create the playlist and add the tracks
        new_playlist = spotify.create_playlist(auth_header, user_id, playlist_name, playlist_description)
        new_playlist_id = new_playlist['id']
        added_playlist = spotify.add_tracks_to_playlist(auth_header, new_playlist_id, tracks_uris)
        playlist = spotify.get_playlist(auth_header, new_playlist_id)
        tracks_data = []
        for item in playlist["items"]:
            track_data = item["track"]
            tracks_data.append(track_data)
        return render_template("created_playlist.html", selected_mood=selected_mood, playlist_tracks=tracks_data)
Any help would be deeply appreciated. Apologies if anything is badly explained; I am new to Stack Overflow and English is not my first language.
Found the error!
For anyone having the same problem: it was actually very simple. Just substitute the while loop with an if statement when checking that the length of the playlist is less than 30.
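In other words, keeping the names from the route above, the appending block becomes (a minimal sketch of that fix):

if track_mood == "Angry":
    # append this track once (if there is still room), instead of
    # appending the same track over and over until the list has 30 entries
    if len(tracks_uris) < 30:
        track_uri = "spotify:track:{}".format(track['id'])
        tracks_uris.append(track_uri)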
On the SoundCloud API guide (https://developers.soundcloud.com/docs/api/guide#pagination), the example given for reading more than 100 pieces of data is as follows:
# get first 100 tracks
tracks = client.get('/tracks', order='created_at', limit=page_size)
for track in tracks:
    print track.title

# start paging through results, 100 at a time
tracks = client.get('/tracks', order='created_at', limit=page_size,
                    linked_partitioning=1)
for track in tracks:
    print track.title
I'm pretty certain this is wrong, as I found that tracks.collection needs referencing rather than just tracks. Based on the GitHub python soundcloud API wiki, it should look more like this:
tracks = client.get('/tracks', order='created_at', limit=10, linked_partitioning=1)
while tracks.collection != None:
    for track in tracks.collection:
        print(track.playback_count)
    tracks = tracks.GetNextPartition()
Here I have removed the indent from the last line (I think there is an error on the wiki: there it sits inside the for loop, which makes no sense to me). This works for the first loop, but it doesn't work for successive pages because the GetNextPartition() function is not found. I've tried the last line as:
tracks = tracks.collection.GetNextPartition()
...but no success.
Maybe I'm getting versions mixed up? I'm trying to run this with Python 3.4 after downloading the version from here: https://github.com/soundcloud/soundcloud-python
Any help much appreciated!
For anyone that cares, I found this solution on the SoundCloud developer forum. It is slightly modified from the original case (searching for tracks) to list my own followers. The trick is to call the client.get function repeatedly, passing the previously returned "users.next_href" as the request that points to the next page of results. Hooray!
pgsize = 200
c = 1
me = client.get('/me')

# first call to get a page of followers
users = client.get('/users/%d/followers' % me.id, limit=pgsize, order='id',
                   linked_partitioning=1)
for user in users.collection:
    print(c, user.username)
    c = c + 1

# linked_partitioning means .next_href exists
while users.next_href != None:
    # pass the contents of users.next_href that contains 'cursor=' to
    # locate the next page of results
    users = client.get(users.next_href, limit=pgsize, order='id',
                       linked_partitioning=1)
    for user in users.collection:
        print(c, user.username)
        c = c + 1
I'm new to Python and I'm trying to make a simple bulletin board system app using web2py. I am trying to add a post to a certain board, and I linked the post and board by including the following field in my post table: Field('board_id', db.board). When I try to create a post inside a particular board, it gives me an error: "OperationalError: no such column: board.id". My code for add_post:
def add_post():
    board = db.board(request.args(0))
    form = SQLFORM(db.post)
    db.pst.board_id.default = db.board.id
    if form.process().accepted:
        session.flash = T('The data was inserted')
        redirect(URL('default', 'index'))
    return dict(form=form, board=board)
When I try to do {{=board}} on the page that shows the posts in a certain board, I get Row {'name': 'hi', 'id': 1L, 'pst': Set (pst.board_id = 1), 'description': 'hi'}, so I know it's there in the database. But when I do the same thing on the "add post" form page, it says "board: None". I'm extremely confused; please point me in the right direction!
There appear to be several problems with your function. First, you are assigning the default value of the board_id field to be a Field object (i.e., db.board.id) rather than an actual id value (e.g., board.id). Second, any default values should be assigned before creating the SQLFORM.
Finally, you pass db.post to SQLFORM, but in the next line, the post table appears to be called db.pst -- presumably these are not two separate tables and one is just a typo.
Regarding the issue of {{=board}} displaying None, that indicates that board = db.board(request.args(0)) is not retrieving a record, which would be due to request.args(0) itself being None or being a value that does not match any record id in db.board. You should check how you are generating the links that lead to add_post and confirm that there is a valid db.board id in the first URL arg. In any case, it might be a good idea to detect when there is no valid board record and either redirect or display an error message.
So, your function should look something like this:
def add_post():
    board = db.board(request.args(0)) or redirect(URL('default', 'index'))
    db.pst.board_id.default = board.id
    form = SQLFORM(db.pst)
    form.process(next=URL('default', 'index'),
                 message_onsuccess=T('The data was inserted'))
    return dict(form=form, board=board)
Note, if you are confident that links to add_post will include valid board IDs, then you can eliminate the first line altogether, as there is no reason to retrieve a record based on its ID if the only field you need from it is the ID (which you already have). Instead, the second line could be:
db.pst.board_id.default = request.args(0) or redirect(URL('default', 'index'))
I am working on a project where I scrape a number of blogs and save a selection of the data, such as the title of the post, the date it was posted, and the content of the post, to a SQLite database.
The goal in the end is to do some fancy textual analyses, but right now I have a problem with writing the data to the database.
I work with the pattern library for Python (the module for databases can be found here).
I am now working on the third blog. The data from the other two blogs is already saved in the database, and since the third blog is structured similarly, I adapted the code.
There are several functions that are well integrated with each other, and they work fine. I can also access all the data in the right way; when I try it out in an IPython Notebook, it works fine. When I ran the code as a trial in the console for only one blog page (there are 43 altogether), it also worked and saved everything nicely in the database. But when I ran it again for all 43 pages, it threw a date error.
There are some comments and print statements inside the functions now, which I used for debugging. The problem seems to happen in the function parse_post_info, which passes a dictionary on to the function that goes over all blog pages and opens every single post, and then saves the dictionary that parse_post_info returns IF it is not None. But I think it IS empty, because something about the date format goes wrong.
Also, why does the code work once, and then the same code throws a date error the second time:
DateError: unknown date format for '2015-06-09T07:01:55+00:00'
Here is the function:
from pattern.db import Database, field, pk, date, STRING, INTEGER, BOOLEAN, DATE, NOW, TEXT, TableError, PRIMARY, eq, all
from pattern.web import URL, Element, DOM, plaintext

def parse_post_info(p):
    """ This function receives a post Element from the post list and
    returns a dictionary with post url, post title, labels, date.
    """
    try:
        post_header = p("header.entry-header")[0]
        title_tag = post_header("a < h1")[0]
        post_title = plaintext(title_tag.content)
        print post_title
        post_url = title_tag("a")[0].href
        date_tag = post_header("div.entry-meta")[0]
        post_date = plaintext(date_tag("time")[0].datetime).split("T")[0]
        #post_date = date(post_date_text)
        print post_date
        post_id = int(((p).id).split("-")[-1])
        post_content = get_post_content(post_url)
        labels = " "
        print labels
        return dict(blog_no=blog_no,
                    post_title=post_title,
                    post_url=post_url,
                    post_date=post_date,
                    post_id=post_id,
                    labels=labels,
                    post_content=post_content
                    )
    except:
        pass
The date() function returns a new Date, a convenient subclass of Python's datetime.datetime. It takes an integer (Unix timestamp), a string, or NOW.
The string format it expects is "YYYY-MM-DD hh:mm:ss". Your value, '2015-06-09T07:01:55+00:00', has a "T" separator and a timezone offset (which can also differ from local time), so date() reports an unknown date format; you need to convert the string to the expected format first. How to convert the time format can be found here.
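As a rough sketch of that conversion with only the standard library (assuming the offset is always "+00:00", as in the error message; real timezone handling would need more work):

import datetime

iso_timestamp = '2015-06-09T07:01:55+00:00'

# drop the "+00:00" offset, then parse the remaining ISO 8601 timestamp
parsed = datetime.datetime.strptime(iso_timestamp.split("+")[0], "%Y-%m-%dT%H:%M:%S")

# re-serialize it in the "YYYY-MM-DD hh:mm:ss" form that date() expects
post_date = parsed.strftime("%Y-%m-%d %H:%M:%S")
print post_date  # 2015-06-09 07:01:55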