Django tables2 displays time as am/pm instead of 24H standard - python

I'm currently working on a website in Django where data from a MySQL database is shown with django-tables2. The issue is that all TIME fields in the database get converted to am/pm format instead of the 24-hour standard, which is what I'd expect from what I've gathered. I've seen multiple people with the opposite issue, but none of those solutions worked. After googling and experimenting for almost a week, I'm now asking the community.
Is there a way to force the time standard for the TimeFormat fields in Django to correspond to the same format as the MySQL TIME (unlimited/24 hours)?
It would be very helpful as I'm currently displaying time intervals, and 00:25 which should be 25 minutes is shown as 12:25 pm.
class LoggTable(TableReport):
    glider = tables.Column(accessor='glider.glider_id')
    towing = tables.Column(accessor='towing.towing_id')
    glider_pilot = tables.Column(accessor='glider_pilot.pilot_id')
    towing_pilot = tables.Column(accessor='towing_pilot.pilot_id')

    class Meta:
        model = FlightData
        exclude = ('max_height',)  # note the comma: exclude expects a sequence, not a bare string
        attrs = {'class': 'paleblue'}

We solved the issue by adding
TIME_FORMAT = 'H:i'
to our settings.py file; we had tried similar solutions before but didn't find this one until now. We also set
USE_L10N = False
because localized formatting otherwise takes precedence over TIME_FORMAT.
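Put together, a minimal sketch of the relevant settings.py fragment (with USE_L10N = True, Django's locale-aware formatting would override TIME_FORMAT, which is why both lines are needed):

```python
# settings.py -- force 24-hour display for time values
TIME_FORMAT = 'H:i'   # e.g. 00:25 instead of 12:25 a.m.
USE_L10N = False      # otherwise locale formats override TIME_FORMAT
```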

Related

Setting Timestamp in Python w/ timezonedb.com

I have an app that uses timezonedb to grab local timezone information when creating a new post, but I am not sure of the math in order to get the new posts to reflect the timezone where I am. For example, I am currently in South Africa, posted an update to my server (which is using UTC time), and the date/time on the post gives PST. I would love some help with the code here, as it may just be me being bad at math.
At this time UTC: Wed Jan 26 05:33:09 UTC 2022
I made a post with timezonedb info:
timestamp: 1643182360
dst: 0
offset: 7200
The post showed up on my app as 09:33pm yesterday (it was 7:33 am here). I am normally based in California, so I'm not sure if there is something I can do to fix this.
In my Django settings, I am using TIME_ZONE = 'US/Pacific' and USE_TZ = True.
In my views:
import datetime
import pytz

def post(self, request, *args, **kwargs):
    if data['timestamp'] != '':
        offset = data['tz_offset']
        timestamp = data['timestamp']
        if timestamp != '' and offset != '':
            if int(offset) < 0:
                timestamp = int(data['timestamp']) + abs(int(offset))
            else:
                timestamp = int(data['timestamp']) - abs(int(offset))
        naive_time = datetime.datetime.fromtimestamp(int(timestamp))
        localtz = pytz.timezone(data['tz_location'])
        aware_est = localtz.localize(naive_time)
        utc = aware_est.astimezone(pytz.utc)
        data['timestamp'] = pytz.timezone(data['tz_location']).localize(
            naive_time, is_dst=data['tz_dst'])
    else:
        data['timestamp'] = datetime.datetime.now()
Is this an issue that I could fix with my settings.py or is it an issue with my views?
A few things:
1643182360 == 2022-01-26T07:32:40Z. Z means UTC, and Unix Timestamps are always in terms of UTC. Thus, your input timestamp is shifted prematurely. Don't try to adjust the timestamp for time zone when you save it - just save the UTC time.
You are doing too much math in your view. In general, any time you find yourself adding or subtracting an offset from a timestamp, you're likely picking a different point in time - not adjusting the time zone. None of that math should be there.
It's a bit unclear what data you are posting at which step and how/why you are using timezonedb.com. You show a tz_location in your code, but not in your data.
If indeed you have a time zone identifier, you don't need either the offset or the DST flag at all. Just convert from UTC directly to that time zone. Let pytz (or dateutil, arrow, or the built-in zoneinfo in Python 3.9+) do the work for you.
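To illustrate that last point, a minimal sketch using the standard-library zoneinfo (Python 3.9+) and the timestamp from the question; Africa/Johannesburg stands in for your tz_location, and there is no offset or DST arithmetic anywhere:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

ts = 1643182360  # Unix timestamps are always in UTC

# Interpret the timestamp as UTC...
utc_dt = datetime.fromtimestamp(ts, tz=timezone.utc)

# ...then convert directly to the target zone; the zone database
# handles the offset and DST for you
local_dt = utc_dt.astimezone(ZoneInfo("Africa/Johannesburg"))

print(utc_dt.isoformat())    # 2022-01-26T07:32:40+00:00
print(local_dt.isoformat())  # 2022-01-26T09:32:40+02:00
```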

How to query with time filters in GoogleScraper?

Although Google's official API offers neither time information in query results nor time filtering for keywords, there is a time-filtering option in the advanced search:
Google results for stackoverflow in the last one hour
GoogleScraper library offers many flexible options BUT time related ones. How to add time features using the library?
After a bit of inspection, I've found that Google sends the time-filtering information as a qdr value in the tbs parameter (which possibly means "time-based search", although this is not officially stated):
https://www.google.com/search?tbs=qdr:h1&q=stackoverflow
This gets the results for the past hour. The letters d, m and y can be used for days, months and years respectively.
Also, to add sorting by date, add the sbd value (presumably "sort by date") as well:
https://www.google.com/search?tbs=qdr:h1,sbd:1&q=stackoverflow
I was able to insert these parameters into GoogleScraper's base Google URL. Insert the lines below at the end of the get_base_search_url_by_search_engine() method (just before the return) in scraping.py:
if "google" in str(specific_base_url):
    specific_base_url = "https://www.google.com/search?tbs=qdr:{},sbd:1".format(
        config.get("time_filter", ""))
Now use the time_filter option in your config:
from GoogleScraper import scrape_with_config

config = {
    'use_own_ip': True,
    'keyword_file': "keywords.txt",
    'search_engines': ['google'],
    'num_pages_for_keyword': 2,
    'scrape_method': 'http',
    'time_filter': 'd15'  # up to 15 days ago
}
search = scrape_with_config(config)
Results will only include the requested time range. Additionally, text snippets in the results contain raw date information:
one_sample_result = search.serps[0].links[0]
print(one_sample_result.snippet)
# 4 mins ago It must be pretty easy - let propertytotalPriceOfOrder =
# order.items.map(item => +item.unit * +item.quantity * +item.price);.
# where order is your entire json object.

Scraping blog and saving date to database causes DateError: unknown date format

I am working on a project where I scrape a number of blogs, and save a selection of the data to a SQLite database. Such as the title of the post, the date it was posted, and the content of the post.
The goal in the end is to do some fancy textual analyses, but right now I have a problem with writing the data to the database.
I work with the pattern library for Python (its database module is documented here).
I am busy with the third blog now. The data from the two other blogs is already saved in the database, and for the third blog, which is similarly structured, I adapted the code.
There are several functions, well integrated with each other, and they work fine. I also get access to all the data the right way when I try it out in an IPython Notebook. When I ran the code as a trial in the console for only one blog page (there are 43 altogether), it also worked and saved everything nicely in the database. But when I ran it again for all 43 pages, it threw a date error.
There are some comments and print statements inside the functions now, which I used for debugging. The problem seems to happen in the function parse_post_info, which passes a dictionary on to the function that goes over all blog pages and opens every single post; that function then saves the dictionary parse_post_info returns IF it is not None, but I think it IS empty, because something about the date format goes wrong.
Also - why does the code work once, and then the same code throw a DateError the second time?
DateError: unknown date format for '2015-06-09T07:01:55+00:00'
Here is the function:
from pattern.db import Database, field, pk, date, STRING, INTEGER, BOOLEAN, DATE, NOW, TEXT, TableError, PRIMARY, eq, all
from pattern.web import URL, Element, DOM, plaintext

def parse_post_info(p):
    """ This function receives a post Element from the post list and
    returns a dictionary with post url, post title, labels, date.
    """
    try:
        post_header = p("header.entry-header")[0]
        title_tag = post_header("a < h1")[0]
        post_title = plaintext(title_tag.content)
        print post_title
        post_url = title_tag("a")[0].href
        date_tag = post_header("div.entry-meta")[0]
        post_date = plaintext(date_tag("time")[0].datetime).split("T")[0]
        #post_date = date(post_date_text)
        print post_date
        post_id = int(((p).id).split("-")[-1])
        post_content = get_post_content(post_url)
        labels = " "
        print labels
        return dict(blog_no=blog_no,
                    post_title=post_title,
                    post_url=post_url,
                    post_date=post_date,
                    post_id=post_id,
                    labels=labels,
                    post_content=post_content)
    except:
        pass
pattern's date() function returns a new Date, a convenient subclass of Python's datetime.datetime. It takes an integer (a Unix timestamp), a string or NOW - but a string has to be in a format it recognizes, and the default is "YYYY-MM-DD hh:mm:ss".
Your scraped value '2015-06-09T07:01:55+00:00' is ISO 8601: it has a 'T' separator and a '+00:00' UTC offset, which is why date() reports an unknown date format. Convert the string to the expected format before saving it (and note that the UTC offset means the value can differ from your local time).
The supported time formats can be found here.
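One way to fix it, sketched here with the value from the error message: normalize the scraped ISO 8601 string into the "YYYY-MM-DD hh:mm:ss" form before handing it to date():

```python
from datetime import datetime

iso = '2015-06-09T07:01:55+00:00'

# Drop the '+00:00' offset and parse the 'T'-separated form...
dt = datetime.strptime(iso[:19], "%Y-%m-%dT%H:%M:%S")

# ...then re-emit it in the default format pattern's date() understands
post_date = dt.strftime("%Y-%m-%d %H:%M:%S")
print(post_date)  # 2015-06-09 07:01:55
```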

Best practices to filter in Django

In one of the pages of my Django app I have a page that simply displays all employees information in a table:
Like so:
First Name:  Last Name:  Age:  Hire Date:
Bob          Johnson     21    03/19/2011
Fred         Jackson     50    12/01/1999
Now, I prompt the user for 2 dates and I want to know if an employee was hired between those 2 dates.
For an HTTP GET I just render the page, and for an HTTP POST I redirect to a URL with the variables embedded in it.
my urls.py file has these patterns:
('^employees/employees_by_date/$', 'project.reports.filter_by_date'),
('^employees/employees_by_date/sort/(?P<begin_date>\d+)/(?P<end_date>\d+)/$', EmployeesByDate.as_view()),
And my filter_by_date function looks like this:
def filter_by_date(request):
    if request.method == 'GET':
        return render(request, "../templates/reports/employees_by_date.html",
                      {'form': BasicPrompt()})
    else:
        form = BasicPrompt(request.POST)
        if form.is_valid():
            begin_date = form.cleaned_data['begin_date']
            end_date = form.cleaned_data['end_date']
            return HttpResponseRedirect('../reports/employees_by_date/sort/' +
                                        str(begin_date) + '/' + str(end_date) + '/')
The code works fine, the problem is I'm new to web dev and this doesn't feel like I'm accomplishing this in the right way. I want to use best practices so can anyone either confirm I am or guide me in the proper way to filter by dates?
Thanks!
You're right, it's a bit awkward to query your API in that way. If you need to add the employee name and something else to the filter, you will end up with a very long URL and it won't be flexible.
Your filter parameters (start and end date) should be added as a query string in the URL rather than being part of the path.
In this case, the URL would be employees/employees_by_date/?start_date=xxx&end_date=yyy, and the dates can be retrieved in the view using start_date = request.GET['start_date'].
If a form is used with method='get', the inputs in the form are automatically converted to a query string and appended to the end of the URL.
If no form is used, parameters need to be URL-encoded with a function so that values containing special characters like /, $ or % can be passed safely.
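For instance, a quick sketch with the standard library's urlencode (the parameter names here just mirror the ones suggested above):

```python
from urllib.parse import urlencode

# Build the query string from the filter parameters;
# urlencode percent-escapes any special characters in the values
params = {'start_date': '2011-03-19', 'end_date': '2011-12-01'}
url = '/employees/employees_by_date/?' + urlencode(params)
print(url)  # /employees/employees_by_date/?start_date=2011-03-19&end_date=2011-12-01
```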
Use Unix timestamps instead of mm/dd/yyyy dates. A Unix timestamp is the number of seconds that have elapsed since Jan 1 1970 ("the Epoch"), so it's just a simple integer. As I'm writing this, the Unix time is 1432071354.
They aren't very human-readable, but Unix timestamps are unambiguous, concise, and can be matched with the simple regex \d+.
You'll see lots of APIs around the web use them, for example Facebook. Scroll down to "time based pagination", those numbers are Unix timestamps.
The problem with mm/dd/yyyy dates is ambiguity. Is it mm/dd/yyyy (US)? or dd/mm/yyyy (elsewhere)? What about mm-dd-yyyy?
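For example, a round trip between a calendar date and a Unix timestamp, using the first hire date from the table above interpreted as UTC midnight:

```python
from datetime import datetime, timezone

# Date -> Unix timestamp
dt = datetime(2011, 3, 19, tzinfo=timezone.utc)
ts = int(dt.timestamp())
print(ts)  # 1300492800

# Unix timestamp -> date: unambiguous, no mm/dd vs dd/mm question
print(datetime.fromtimestamp(ts, tz=timezone.utc).date())  # 2011-03-19
```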

Query Soundcloud API using created_at filter

Is it possible to use the created_at filter as part of a query in Python? I added it into my query filters, trying several different ways, but it seems to ignore that particular filter. The results that come back contain everything from last week to 3 years ago, and I'm only looking for recent tracks. I have to believe this is doable somehow...
stamp = "2013/07/01 09:24:50 +0000"
tracks = client.get('/tracks', q='Metallica', genre='',
                    duration={'from': 1800000},
                    created_at={'from': stamp},
                    limit='5', tags='Metal')
I've also tried just entering the datetime stamp directly instead of as a variable, with the same results. Am I just botching the code somewhere here? Or can you really not specify the created_at date for your query results?
Yes! It is possible to use the created_at filter as part of a query in Python.
I do not know how the SoundCloud API prioritizes each of the filters, so it is possible that adding more filters leads to unexpected results.
Limiting the filters to just q and created_at yields your desired result.
# https://github.com/soundcloud/soundcloud-python
import soundcloud

# Authentication
CLIENT_ID = 'insert_client_id_here'
client = soundcloud.Client(client_id=CLIENT_ID)

# Call GET request with parameters
# excludes: {genre, tags, duration}
# includes: {order, limit} for organization
stamp = "2013/07/01 09:24:50 +0000"
tracks = client.get('/tracks',
                    q='Metallica',
                    created_at={'from': stamp},
                    order='created_at',
                    limit=5,
                    )

# Print the results
for i, v in enumerate(tracks):
    print i, v.title, v.created_at
