Python: find oldest date value in a list of objects

While I understand that you can get the oldest date in a list of dates by using min(list_of_dates), say I have a list of dictionaries which contain arbitrary keys that have date values:
[{key1: date1}, {key2: date2}, {key3: date3}]
Is there a built-in method to return the dictionary with the oldest date value? Do I need to iterate over the list, and if so what would that look like?

You can take the minimum over the list, keyed on each dictionary's own minimum date value:
min(list_of_dictionaries, key=lambda d: min(d.values()))
This works with one or with multiple values per dictionary in the list, provided they are all date objects.
Demo:
>>> from datetime import date
>>> import random, string
>>> def random_date(): return date.fromordinal(random.randint(730000, 740000))
...
>>> def random_key(): return ''.join([random.choice(string.ascii_lowercase) for _ in range(10)])
...
>>> list_of_dictionaries = [{random_key(): random_date() for _ in range(random.randint(1, 3))} for _ in range(5)]
>>> list_of_dictionaries
[{'vsiaffoloi': datetime.date(2018, 1, 3)}, {'omvhscpvqg': datetime.date(2020, 10, 7), 'zyvrtvptuw': datetime.date(2001, 7, 25), 'hvcjgsiicz': datetime.date(2019, 11, 30)}, {'eoltbkssmj': datetime.date(2016, 2, 27), 'xqflazzvyv': datetime.date(2024, 9, 1), 'qaszxzxbsg': datetime.date(2014, 11, 26)}, {'noydyjtmjf': datetime.date(2013, 6, 4), 'okieejoiay': datetime.date(2020, 12, 15), 'ddcqoxkpdn': datetime.date(2002, 7, 13)}, {'vbwstackcq': datetime.date(2025, 12, 14)}]
>>> min(list_of_dictionaries, key=lambda d: min(d.values()))
{'omvhscpvqg': datetime.date(2020, 10, 7), 'zyvrtvptuw': datetime.date(2001, 7, 25), 'hvcjgsiicz': datetime.date(2019, 11, 30)}
or just one value per dictionary:
>>> list_of_dictionaries = [{random_key(): random_date()} for _ in range(5)]
>>> list_of_dictionaries
[{'vmlrfbyybp': datetime.date(2001, 10, 25)}, {'tvenffnapv': datetime.date(2003, 1, 1)}, {'ivypocbyuz': datetime.date(2026, 8, 9)}, {'trywaosiqm': datetime.date(2022, 7, 29)}, {'ndqmejmfqj': datetime.date(2001, 2, 13)}]
>>> min(list_of_dictionaries, key=lambda d: min(d.values()))
{'ndqmejmfqj': datetime.date(2001, 2, 13)}

Per the official docs, min supports an arbitrary key function to specify what to compare on. If you need all dictionaries in date order rather than just the oldest one, sorted accepts the same key function.
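For instance, a minimal sketch (with made-up sample data) showing min for the single oldest dictionary and sorted for the full ordering, both using the same key function:

```python
from datetime import date

# hypothetical sample data: dicts with arbitrary keys and date values
list_of_dictionaries = [
    {'a': date(2018, 1, 3)},
    {'b': date(2020, 10, 7), 'c': date(2001, 7, 25)},
    {'d': date(2016, 2, 27)},
]

# min() returns the single dictionary containing the oldest date;
# sorted() with the same key orders every dictionary by its oldest date
oldest = min(list_of_dictionaries, key=lambda d: min(d.values()))
ordered = sorted(list_of_dictionaries, key=lambda d: min(d.values()))
```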

Output for nested list comprehension as empty list instead of list with None

I have a nested list which I am trying to parse into datetime objects.
Some strings have invalid formats which should be an empty list in the output.
input = [
    ['20210804:1700', '20210805:1600'],
    ['20210805:1700', '20210806:1600'],
    ['20210807:CLOSED']]
So far I have the following.
def _helper(x):
    try:
        return dt.datetime.strptime(x, '%Y%m%d:%H%M')
    except Exception:
        return
output = [[_helper(i) for i in group] for group in input]
Currently the output is as follows
[[datetime.datetime(2021, 8, 4, 17, 0), datetime.datetime(2021, 8, 5, 16, 0)],
[datetime.datetime(2021, 8, 5, 17, 0), datetime.datetime(2021, 8, 6, 16, 0)],
[None]]
I would like the output to have an empty list [] instead of [None].
How can I do that?
Thanks
output = [[_helper(i) for i in group if _helper(i) is not None] for group in input]
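Note that this comprehension calls _helper twice per element. On Python 3.8+, an assignment expression (the walrus operator) can avoid the double call; a sketch with hypothetical sample data:

```python
import datetime as dt

def _helper(x):
    try:
        return dt.datetime.strptime(x, '%Y%m%d:%H%M')
    except ValueError:
        return None

# hypothetical sample input (named data to avoid shadowing the input builtin)
data = [['20210804:1700', '20210805:1600'],
        ['20210807:CLOSED']]

# (d := _helper(i)) parses each string exactly once; the filter reuses d
output = [[d for i in group if (d := _helper(i)) is not None] for group in data]
```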
from nltk.util import flatten
def _helper(x):
    try:
        return dt.datetime.strptime(x, '%Y%m%d:%H%M')
    except Exception:
        return []
output = [flatten([_helper(i) for i in group]) for group in input]
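If pulling in nltk just for flatten seems heavy, itertools.chain.from_iterable from the standard library achieves the same effect; a sketch of the same idea, with _helper returning a one-element list on success (sample data assumed):

```python
import datetime as dt
from itertools import chain

def _helper(x):
    try:
        # wrap the parsed value in a list so parse failures collapse to []
        return [dt.datetime.strptime(x, '%Y%m%d:%H%M')]
    except ValueError:
        return []

# hypothetical sample input
data = [['20210804:1700', '20210805:1600'],
        ['20210807:CLOSED']]

output = [list(chain.from_iterable(_helper(i) for i in group)) for group in data]
```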
You could use a nested list comprehension to filter out None values from the generated list.
import datetime as dt
input = [
    ['20210804:1700', '20210805:1600'],
    ['20210805:1700', '20210806:1600'],
    ['20210807:CLOSED']]
def _helper(x):
    try:
        return dt.datetime.strptime(x, '%Y%m%d:%H%M')
    except Exception:
        return
output = [[x for x in [_helper(i) for i in group] if x is not None]
          for group in input]
print(output)
Outputs...
[
[datetime.datetime(2021, 8, 4, 17, 0), datetime.datetime(2021, 8, 5, 16, 0)],
[datetime.datetime(2021, 8, 5, 17, 0), datetime.datetime(2021, 8, 6, 16, 0)],
[]
]

How to create a nested list conditioned on a parameter in python

I have generated a day-wise nested list and want to calculate total duration between login and logout sessions and store that value individually in a duration nested list, organized by the day in which the login happened.
My python script is:
import datetime
import itertools
Logintime = [
    datetime.datetime(2021,1,1,8,10,10),
    datetime.datetime(2021,1,1,10,25,19),
    datetime.datetime(2021,1,2,8,15,10),
    datetime.datetime(2021,1,2,9,35,10)
]
Logouttime = [
    datetime.datetime(2021,1,1,10,10,11),
    datetime.datetime(2021,1,1,17,0,10),
    datetime.datetime(2021,1,2,9,30,10),
    datetime.datetime(2021,1,2,17,30,12)
]
Logintimedaywise = [list(group) for k, group in itertools.groupby(Logintime,
key=datetime.datetime.toordinal)]
Logouttimedaywise = [list(group) for j, group in itertools.groupby(Logouttime,
key=datetime.datetime.toordinal)]
print(Logintimedaywise)
print(Logouttimedaywise)
# calculate total duration
temp = []
l = []
for p,q in zip(Logintimedaywise,Logouttimedaywise):
    for a,b in zip(p, q):
        tdelta = (b-a)
        diff = int(tdelta.total_seconds()) / 3600
        if diff not in temp:
            temp.append(diff)
l.append(temp)
print(l)
This script generates the following output (the durations in variable l come out as a flat list inside a singleton list):
[[datetime.datetime(2021, 1, 1, 8, 10, 10), datetime.datetime(2021, 1, 1, 10, 25, 19)], [datetime.datetime(2021, 1, 2, 8, 15, 10), datetime.datetime(2021, 1, 2, 9, 35, 10)]]
[[datetime.datetime(2021, 1, 1, 10, 10, 11), datetime.datetime(2021, 1, 1, 17, 0, 10)], [datetime.datetime(2021, 1, 2, 9, 30, 10), datetime.datetime(2021, 1, 2, 17, 30, 12)]]
[[2.000277777777778, 6.5808333333333335, 1.25, 7.917222222222223]]
But my desired output format is the following nested list of durations (each item in the list should be the list of durations for a given login day):
[[2.000277777777778, 6.5808333333333335] , [1.25, 7.917222222222223]]
anyone can help how can i store total duration as a nested list according to the login day?
thanks in advance.
Try changing this piece of code:
# calculate total duration
temp = []
l = []
for p,q in zip(Logintimedaywise,Logouttimedaywise):
    for a,b in zip(p, q):
        tdelta = (b-a)
        diff = int(tdelta.total_seconds()) / 3600
        if diff not in temp:
            temp.append(diff)
l.append(temp)
print(l)
To:
# calculate total duration
l = []
for p,q in zip(Logintimedaywise,Logouttimedaywise):
    l.append([])
    for a,b in zip(p, q):
        tdelta = (b-a)
        diff = int(tdelta.total_seconds()) / 3600
        if diff not in l[-1]:
            l[-1].append(diff)
print(l)
Then the output would be:
[[datetime.datetime(2021, 1, 1, 8, 10, 10), datetime.datetime(2021, 1, 1, 10, 25, 19)], [datetime.datetime(2021, 1, 2, 8, 15, 10), datetime.datetime(2021, 1, 2, 9, 35, 10)]]
[[datetime.datetime(2021, 1, 1, 10, 10, 11), datetime.datetime(2021, 1, 1, 17, 0, 10)], [datetime.datetime(2021, 1, 2, 9, 30, 10), datetime.datetime(2021, 1, 2, 17, 30, 12)]]
[[2.000277777777778, 6.5808333333333335], [1.25, 7.917222222222223]]
I add a new sublist on every iteration of the outer loop.
Your solution and the answer by #U11-Forward will break if the login and logout for the same session happen on different days, since the inner lists in Logintimedaywise and Logouttimedaywise will have different numbers of elements.
To avoid that, a simpler solution is to first calculate the duration for every login/logout pair, then build the nested lists based only on the login day (or the logout day, if you prefer), like this:
import datetime
import itertools
import numpy
# define the login and logout times
Logintime = [datetime.datetime(2021,1,1,8,10,10),datetime.datetime(2021,1,1,10,25,19),datetime.datetime(2021,1,2,8,15,10),datetime.datetime(2021,1,2,9,35,10)]
Logouttime = [datetime.datetime(2021,1,1,10,10,11),datetime.datetime(2021,1,1,17,0,10), datetime.datetime(2021,1,2,9,30,10),datetime.datetime(2021,1,2,17,30,12) ]
# calculate the duration and the unique days in the set
duration = [ int((logout - login).total_seconds())/3600 for login,logout in zip(Logintime,Logouttime) ]
login_days = numpy.unique([login.day for login in Logintime])
# create the nested list of durations
# each inner list correspond to a unique login day
Logintimedaywise = [[ login for login in Logintime if login.day == day ] for day in login_days ]
Logouttimedaywise = [[ logout for login,logout in zip(Logintime,Logouttime) if login.day == day ] for day in login_days ]
duration_daywise = [[ d for d,login in zip(duration,Logintime) if login.day == day ] for day in login_days ]
# check
print(Logintimedaywise)
print(Logouttimedaywise)
print(duration_daywise)
Outputs
[[datetime.datetime(2021, 1, 1, 8, 10, 10), datetime.datetime(2021, 1, 1, 10, 25, 19)], [datetime.datetime(2021, 1, 2, 8, 15, 10), datetime.datetime(2021, 1, 2, 9, 35, 10)]]
[[datetime.datetime(2021, 1, 1, 10, 10, 11), datetime.datetime(2021, 1, 1, 17, 0, 10)], [datetime.datetime(2021, 1, 2, 9, 30, 10), datetime.datetime(2021, 1, 2, 17, 30, 12)]]
[[2.000277777777778, 6.5808333333333335], [1.25, 7.917222222222223]]
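For reference, the same day-wise grouping can also be done with only the standard library by keying a dict on each login's date; a minimal sketch using the question's data:

```python
import datetime
from collections import defaultdict

logins = [datetime.datetime(2021, 1, 1, 8, 10, 10),
          datetime.datetime(2021, 1, 1, 10, 25, 19),
          datetime.datetime(2021, 1, 2, 8, 15, 10),
          datetime.datetime(2021, 1, 2, 9, 35, 10)]
logouts = [datetime.datetime(2021, 1, 1, 10, 10, 11),
           datetime.datetime(2021, 1, 1, 17, 0, 10),
           datetime.datetime(2021, 1, 2, 9, 30, 10),
           datetime.datetime(2021, 1, 2, 17, 30, 12)]

# group durations by the full login date (not just .day), so data
# spanning several months still lands in the correct bucket
durations = defaultdict(list)
for login, logout in zip(logins, logouts):
    durations[login.date()].append((logout - login).total_seconds() / 3600)

daywise = [durations[day] for day in sorted(durations)]
```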

Dict values incrementation in succession validation

I have the following task:
There is a dict like:
{1: datetime.date(2020, 7, 2), 2: datetime.date(2020, 7, 2), 11: datetime.date(2021, 7, 2)}
and it should follow one rule:
after the dict is sorted by its keys, the date value of each element should be >= the previous element's value. The result should be a bool indicating whether this rule holds.
Examples:
correct (each following date is >= the previous one)
{1: datetime.date(2020, 7, 2), 2: datetime.date(2020, 7, 2), 11: datetime.date(2021, 7, 2)}
incorrect (2017 < 2020 in the previous element)
{1: datetime.date(2020, 7, 2), 2: datetime.date(2020, 7, 2), 11: datetime.date(2017, 7, 2)}
What I do to validate proper order:
import more_itertools
# sort dict by keys
sorted_by_keys_numbers = dict(sorted(original_dict.items()))
# check if rule is violated or not
are_dates_sorted = more_itertools.is_sorted(sorted_by_keys_numbers.values())
# returns True or False
But it is heavily function-based and non-Pythonic.
Is there any alternative that is:
1) Pythonic
2) not massive, with multiple levels of nested for, if, etc.?
Thank you
P.S. Python 3.8, so dicts maintain insertion order
I would try something like this, with a generator expression and a zip of two lists built from the dictionary values:
from datetime import datetime
data = {1: datetime(2020, 7, 2),
        2: datetime(2020, 7, 2),
        11: datetime(2021, 7, 4)}
res = all(i <= j for i, j in zip(list(data.values()), list(data.values())[1:]))
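Note that this checks the values in insertion order; to follow the question's rule exactly, the values should first be ordered by key. A sketch combining both steps (sample data assumed):

```python
from datetime import date

# hypothetical sample data: keys out of order, and a violating date
data = {2: date(2020, 7, 2), 1: date(2020, 7, 2), 11: date(2017, 7, 2)}

# order the values by key, then check the sequence is non-decreasing
values = [v for _, v in sorted(data.items())]
is_valid = all(a <= b for a, b in zip(values, values[1:]))
```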

Value difference comparison within a list in python

I have a nested list that contains different variables in it. I am trying to check the difference between two consecutive items and, if a condition matches, group those items together.
i.e.
Item 1 happened on 1-6-2012 1 pm
Item 2 happened on 1-6-2012 4 pm
Item 3 happened on 1-6-2012 6 pm
Item 4 happened on 3-6-2012 5 pm
Item 5 happened on 5-6-2012 5 pm
I want to group the items that have gaps less than 24 Hours. In this case, Items 1, 2 and 3 belong to a group, Item 4 belong to a group and Item 5 belong to another group. I tried the following code:
Time = []
All_Traps = []
Traps = []
Dic_Traps = defaultdict(list)
Traps_CSV = csv.reader(open("D:/Users/d774911/Desktop/Telstra Internship/Working files/Traps_Generic_Features.csv"))
for rows in Traps_CSV:
    All_Traps.append(rows)
All_Traps.sort(key=lambda x: x[9])
for length in xrange(len(All_Traps)):
    if length == (len(All_Traps) - 1):
        break
    Node_Name_1 = All_Traps[length][2]
    Node_Name_2 = All_Traps[length + 1][2]
    Event_Type_1 = All_Traps[length][5]
    Event_Type_2 = All_Traps[length + 1][5]
    Time_1 = All_Traps[length][9]
    Time_2 = All_Traps[length + 1][9]
    Difference = datetime.strptime(Time_2[0:19], '%Y-%m-%dT%H:%M:%S') - datetime.strptime(Time_1[0:19], '%Y-%m-%dT%H:%M:%S')
    if Node_Name_1 == Node_Name_2 and \
       Event_Type_1 == Event_Type_2 and \
       float(Difference.seconds) / (60*60) < 24:
        Dic_Traps[length].append(All_Traps[length])
But I am missing some items. Ideas?
For a sorted list you may use groupby. Here is a simplified example (you should convert your date strings to datetime objects first); it should give the main idea:
from itertools import groupby
import datetime
SRC_DATA = [
    (1, datetime.datetime(2015, 6, 20, 1)),
    (2, datetime.datetime(2015, 6, 20, 4)),
    (3, datetime.datetime(2015, 6, 20, 5)),
    (4, datetime.datetime(2015, 6, 21, 1)),
    (5, datetime.datetime(2015, 6, 22, 1)),
    (6, datetime.datetime(2015, 6, 22, 4)),
]
for group_date, group in groupby(SRC_DATA, key=lambda entry: entry[1].date()):
    print("Group {}: {}".format(group_date, list(group)))
Output:
$ python python_groupby.py
Group 2015-06-20: [(1, datetime.datetime(2015, 6, 20, 1, 0)), (2, datetime.datetime(2015, 6, 20, 4, 0)), (3, datetime.datetime(2015, 6, 20, 5, 0))]
Group 2015-06-21: [(4, datetime.datetime(2015, 6, 21, 1, 0))]
Group 2015-06-22: [(5, datetime.datetime(2015, 6, 22, 1, 0)), (6, datetime.datetime(2015, 6, 22, 4, 0))]
First of all, change those horrible cased variable names. Python has its own convention of naming variables, classes, methods and so on. It's called snake case.
Now, on to what you need to do:
import datetime as dt
import pprint
ts_dict = {}
with open('timex.dat', 'r+') as f:
    for line in f.read().splitlines():
        if line:
            item = line.split('happened')[0].strip().split(' ')[1]
            timestamp_string = line.split('on')[-1].split('pm')[0]
            datetime_stamp = dt.datetime.strptime(timestamp_string.strip(), "%d-%m-%Y %H")
            ts_dict[item] = datetime_stamp
This is a hackish way of giving you this:
ts_dict = {
    '1': datetime.datetime(2012, 6, 1, 1, 0),
    '2': datetime.datetime(2012, 6, 1, 4, 0),
    '3': datetime.datetime(2012, 6, 1, 6, 0),
    '4': datetime.datetime(2012, 6, 3, 5, 0),
    '5': datetime.datetime(2012, 6, 5, 5, 0)}
A dictionary with the item number as key and its datetime timestamp as value.
You can use the timestamp values' attributes (e.g. ts_dict['1'].hour) to do your calculation.
EDIT: It can be optimized a lot.
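As an aside, grouping by calendar date is not quite the same as grouping items whose consecutive gaps are under 24 hours: two items on different dates can still be less than 24 hours apart. A minimal sketch of gap-based grouping over sorted timestamps, using made-up data shaped like the question's:

```python
import datetime as dt

# hypothetical timestamps, already sorted (items 1-5 from the question)
timestamps = [dt.datetime(2012, 6, 1, 13, 0),   # item 1
              dt.datetime(2012, 6, 1, 16, 0),   # item 2
              dt.datetime(2012, 6, 1, 18, 0),   # item 3
              dt.datetime(2012, 6, 3, 17, 0),   # item 4
              dt.datetime(2012, 6, 5, 17, 0)]   # item 5

groups = [[timestamps[0]]]
for prev, cur in zip(timestamps, timestamps[1:]):
    # extend the current group while the gap stays under 24 hours,
    # otherwise start a new group
    if cur - prev < dt.timedelta(hours=24):
        groups[-1].append(cur)
    else:
        groups.append([cur])
```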

Group together arbitrary date objects that are within a time range of each other

I want to split the calendar into two-week intervals starting at 2008-May-5, or any arbitrary starting point.
So I start with several date objects:
import datetime as DT
raw = ("2010-08-01",
       "2010-06-25",
       "2010-07-01",
       "2010-07-08")
transactions = [(DT.datetime.strptime(datestring, "%Y-%m-%d").date(),
                 "Some data here") for datestring in raw]
transactions.sort()
By manually analyzing the dates, I can figure out which dates fall within the same fortnight interval. I want to get a grouping similar to this one:
# Fortnight interval 1
(datetime.date(2010, 6, 25), 'Some data here')
(datetime.date(2010, 7, 1), 'Some data here')
(datetime.date(2010, 7, 8), 'Some data here')
# Fortnight interval 2
(datetime.date(2010, 8, 1), 'Some data here')
import datetime as DT
import itertools
start_date = DT.date(2008, 5, 5)
def mkdate(datestring):
    return DT.datetime.strptime(datestring, "%Y-%m-%d").date()
def fortnight(date):
    return (date - start_date).days // 14
raw = ("2010-08-01",
       "2010-06-25",
       "2010-07-01",
       "2010-07-08")
transactions = [(date, "Some data") for date in map(mkdate, raw)]
transactions.sort(key=lambda transaction: transaction[0])
for key, grp in itertools.groupby(transactions, key=lambda transaction: fortnight(transaction[0])):
    print(key, list(grp))
yields
# (55, [(datetime.date(2010, 6, 25), 'Some data')])
# (56, [(datetime.date(2010, 7, 1), 'Some data'), (datetime.date(2010, 7, 8), 'Some data')])
# (58, [(datetime.date(2010, 8, 1), 'Some data')])
Note that 2010-6-25 is in the 55th fortnight from 2008-5-5, while 2010-7-1 is in the 56th. If you want them grouped together, simply change start_date (to something like 2008-5-16).
PS. The key tool used above is itertools.groupby, which is explained in detail here.
Edit: The lambdas are simply a way to make "anonymous" functions. (They are anonymous in the sense that they are not given names like functions defined by def). Anywhere you see a lambda, it is also possible to use a def to create an equivalent function. For example, you could do this:
import operator
transactions.sort(key=operator.itemgetter(0))
def transaction_fortnight(transaction):
    date, data = transaction
    return fortnight(date)
for key, grp in itertools.groupby(transactions, key=transaction_fortnight):
    print(key, list(grp))
Use itertools.groupby with a lambda function that divides the distance from the starting point by the length of the period.
>>> from itertools import groupby
>>> for i, group in groupby(range(30), lambda x: x // 7):
...     print(list(group))
...
[0, 1, 2, 3, 4, 5, 6]
[7, 8, 9, 10, 11, 12, 13]
[14, 15, 16, 17, 18, 19, 20]
[21, 22, 23, 24, 25, 26, 27]
[28, 29]
So with dates:
import itertools as it
start = DT.date(2008, 5, 5)
lenperiod = 14
for fnight, info in it.groupby(transactions, lambda data: (data[0] - start).days // lenperiod):
    print(list(info))
You can also use week numbers from strftime, with lenperiod expressed as a number of weeks (note that %W restarts at each new year, so periods spanning a year boundary will not group correctly):
for fnight, info in it.groupby(transactions, lambda data: int(data[0].strftime('%W')) // lenperiod):
    print(list(info))
Using a pandas DataFrame with resample works too. Given the OP's data, but with "Some data here" replaced by the letters 'a' through 'd':
>>> import datetime as DT
>>> raw = ("2010-08-01",
... "2010-06-25",
... "2010-07-01",
... "2010-07-08")
>>> transactions = [(DT.datetime.strptime(datestring, "%Y-%m-%d"), data) for
... datestring, data in zip(raw,'abcd')]
[(datetime.datetime(2010, 8, 1, 0, 0), 'a'),
(datetime.datetime(2010, 6, 25, 0, 0), 'b'),
(datetime.datetime(2010, 7, 1, 0, 0), 'c'),
(datetime.datetime(2010, 7, 8, 0, 0), 'd')]
Now try using pandas. First create a DataFrame, naming the columns and setting the indices to the dates.
>>> import pandas as pd
>>> df = pd.DataFrame(transactions,
... columns=['date','data']).set_index('date')
data
date
2010-08-01 a
2010-06-25 b
2010-07-01 c
2010-07-08 d
Now resample every 2 weeks starting on Sunday (the '2W-SUN' offset alias) and concatenate the strings in each bin with sum.
>>> fortnight = df.resample('2W-SUN').sum()
data
date
2010-06-27 b
2010-07-11 cd
2010-07-25 0
2010-08-08 a
Now drill into the data as needed by weekstart
>>> fortnight.loc['2010-06-27']['data']
b
or index
>>> fortnight.iloc[0]['data']
b
or indices
>>> data = fortnight.iloc[:2]['data']
b
date
2010-06-27 b
2010-07-11 cd
Freq: 2W-SUN, Name: data, dtype: object
>>> data[0]
b
>>> data[1]
cd
