I am saving the day and time period of some scheduled tasks in a txt file in this format:
Monday,10:50-11:32
Friday,18:33-18:45
Sunday,17:10-17:31
Sunday,14:10-15:11
Friday,21:10-23:11
I am opening the txt file and getting the contents into a list.
How can I sort the list so that the days and the time periods are in order?
Like this:
Monday,10:50-11:32
Friday,18:33-18:45
Friday,21:10-23:11
Sunday,14:10-15:11
Sunday,17:10-17:31
Ok, let's say you only have the day of week and the timestamps. One alternative is to calculate the number of minutes each item represents (Monday 00:00 = 0 minutes and Sunday 23:59 = max minutes) and sort with that function.
The example below sorts by the first timestamp value. A comment from a fellow SO:er pointed out that this does not take the second timestamp (end time) into account. To include it we can add a decimal tiebreaker by dividing the end time's minutes by the number of minutes in a day:
(int(h2)*60 + int(m2)) / (24*60)  # end-time minutes divided by minutes per day gives a decimal < 1
However the key here is the following code:
weekday[day]*24*60 + int(h1)*60 + int(m1) # gets the total minutes passed, we sort with this!
And of course the sort itself, with a join (double line break). When the key you pass to sorted() is a function, the sorting is based on that function's return values (here, the number of minutes).
'\n\n'.join(sorted(list_, key=get_min))
Enough text... let's jump to a full example (updated version):
import io

file = """Monday,10:50-11:32
Friday,18:33-18:45
Sunday,17:10-17:31
Sunday,14:10-15:11
Friday,21:10-23:11"""

list_ = [i.strip('\n') for i in io.StringIO(file).readlines() if i != "\n"]

weekday = dict(zip(["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"], [0,1,2,3,4,5,6]))

def get_min(time_str):
    day, time = time_str.split(",")
    h1, m1 = time.split('-')[0].split(":")
    h2, m2 = time.split('-')[1].split(":")
    # whole minutes since Monday 00:00, plus a fractional end-time tiebreaker
    return weekday[day]*24*60 + int(h1)*60 + int(m1) + (int(h2)*60 + int(m2)) / (24*60)

with open("output.txt", "w") as outfile:
    outfile.write('\n\n'.join(sorted(list_, key=get_min)))

print('\n\n'.join(sorted(list_, key=get_min)))
Creates "output.txt" with:
Monday,10:50-11:32
Friday,18:33-18:45
Friday,21:10-23:11
Sunday,14:10-15:11
Sunday,17:10-17:31
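A possible simplification: because the times are zero-padded "HH:MM" strings, plain string comparison already matches chronological order, so a (day index, time string) tuple works as a sort key without any minute arithmetic. A minimal sketch, reusing the weekday dict and list_ from above:

def day_time_key(item):
    day, times = item.split(',')
    # "10:50-11:32" < "21:10-23:11" holds as plain strings; the whole
    # "start-end" string also breaks ties on equal start times by end time
    return (weekday[day], times)

print('\n\n'.join(sorted(list_, key=day_time_key)))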
Relative newbie with Python and Pandas here, finally admitting defeat on not being able to figure this out myself. I have a pandas DataFrame from our energy supplier's API; each row is a 30-minute interval showing the wholesale energy cost in p/kWh ('value_exc_vat'), the solar output for the house ('export'), and a datetime stamp ('datetime').
| index |'value_exc_vat'|'datetime'|'export'|'hour'|'export_rate'|'export_rate_var'|
'hour' is taken from 'datetime' for each row, e.g. 13, 14, 15, 16, etc.
To calculate the price/kWh we are paid I need to calculate
0.97 x 'value_exc_vat' + peak_rate_uplift
where peak_rate_uplift is only applied during the hours 16-19 inclusive.
I've tried just about every method I can think of, but I can't get this to work.
peak_rate = [16,17,18,19]

for hour in df['hour']:
    if hour == peak_rate:
        df['export_rate_var'] = (df['export_rate'] + peak_rate_uplift)
    else:
        df['export_rate_var'] = df['export_rate']
Printing the output from the if statement I can see that 'hour' is being matched for the correct values, but the rest of the statement doesn't then add the peak_rate_uplift as I would expect.
Any advice or help on how to apply the addition to the selected rows would be appreciated; it feels like it should be something simple, but I've been at this for 3 days now...
You could use:
peak_rate = [16,17,18,19]
df['export_rate_var'] = (df['export_rate'] + df.hour.isin(peak_rate) * peak_rate_uplift)
Where df.hour.isin(peak_rate) returns a boolean Series. Multiplied by peak_rate_uplift, this gives a Series which is 0 where the hour is not in the peak-rate hours and peak_rate_uplift where it is.
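A quick sanity check with a toy frame (the peak_rate_uplift value here is just an assumption for the demo):

import pandas as pd

peak_rate_uplift = 5.0  # assumed value for the demo
df = pd.DataFrame({'hour': [15, 16, 19, 20],
                   'export_rate': [10.0, 10.0, 10.0, 10.0]})

peak_rate = [16, 17, 18, 19]
df['export_rate_var'] = df['export_rate'] + df.hour.isin(peak_rate) * peak_rate_uplift
print(df['export_rate_var'].tolist())  # [10.0, 15.0, 15.0, 10.0]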
Does this work:
peak_rate = [16,17,18,19]

for i in range(len(df)):
    # a scalar has no .isin(); use plain membership and write one row at a time
    if df.hour.iloc[i] in peak_rate:
        df.loc[df.index[i], 'export_rate_var'] = df['export_rate'].iloc[i] + peak_rate_uplift
    else:
        df.loc[df.index[i], 'export_rate_var'] = df['export_rate'].iloc[i]
I'm working on some code to manipulate hourly and daily data for a year and am a little confused about how to combine data from the two files. What I am doing is using the hourly pattern of Data Set B but scaling it using daily Data Set A. So in essence (using the example below) I will take the daily average (Data Set A) of 93 cfs and multiply it by 24 hrs in a day, which equals 2232. I'll then sum the hourly cfs values for all 24 hrs of each day (Data Set B), which in this case for 1/1/2021 equals 2596. Normally manipulating a rate in these ways doesn't make sense, but in this case it doesn't matter because the units cancel out. I'd then divide these values by each other, 2232/2596 = 0.8597, and apply that factor to the hourly cfs values for all 24 hrs of that day (Data Set B) to get a new "scaled" dataset (to be Data Set C).
My problem is that I have never coded in Python using two different input datasets (I am a complete newbie). I started experimenting with the code, but the problem is I can't seem to integrate the two datasets. If anyone can point me in the direction of how to integrate two separate input files I'd be most appreciative. Beneath the datasets are my attempts at the code (please note the reverse order of the code: working first with the hourly data (Data Set B) and then the daily data (Data Set A)). My printout of the final scaling factor (SF) is only giving me one value... not all 8,760, because I'm not in the loop... but how can I be in the loop of both input files at the same time?
Data Set A (Daily) -- 365 lines of data:
1/1/2021 93 cfs
1/2/2021 0 cfs
1/3/2021 70 cfs
1/4/2021 70 cfs
Data Set B (Hourly) -- 8,760 lines of data:
1/1/2021 0:00 150 cfs
1/1/2021 1:00 0 cfs
1/1/2021 2:00 255 cfs
(where summation of all 24 hrs of 1/1/2021 = 2596 cfs)
etc.
Sorry if this is a ridiculously easy question... I am very new to coding.
Here is the code that I've written so far. What I need is 8,760 lines of SF that I can then use to multiply the original Data Set B. The final product, Data Set C, will be Date - Time - rescaled hourly data. I actually have to do this for three pumping units in total, giving me a matrix of 5 columns by 8,760 rows, but I think I'll be able to figure the unit part out. My problem now is how to integrate the two data sets. Thank you for reading!
print('Solving the Temperature Model programming problem')

fhand1 = open('Interpolate_CY21_short.txt')
fhand2 = open('WSE_Daily_CY21_short.txt')

# Hourly Interpolated Pardee PowerHouse Data
for line1 in fhand1:
    line1 = line1.rstrip()
    words1 = line1.split()
    # Hourly interpolated data - parsed down (cfs)
    x = float(words1[7])
    if x < 100:
        x = 0
    # print(x)

# WSE Daily Average PowerHouse Data
for line2 in fhand2:
    line2 = line2.rstrip()
    words2 = line2.split()
    # Daily cfs average x 24 hrs
    aa = float(words2[2]) * 24
    # print(aa)

SF = x * aa
print(SF)
This is how you would get the data into two lists:

fhand1 = open('Interpolate_CY21_short.txt', 'r')  # hourly (Data Set B)
fhand2 = open('WSE_Daily_CY21_short.txt', 'r')    # daily (Data Set A)

daily_average = fhand2.readlines()
daily = fhand1.readlines()

# this is what the two lists would look like, roughly
# (each line would be a separate string)
daily_average = ["1/1/2021 93 cfs", "1/2/2021 0 cfs"]
daily = ["1/1/2021 0:00 150 cfs", "1/1/2021 1:00 0 cfs", "1/2/2021 1:00 0 cfs"]
Then, to process the lists you could use a double (nested) for loop:

for average_line in daily_average:
    average_line = average_line.rstrip()
    average_date, average_count, average_symbol = average_line.split()
    for daily_line in daily:
        daily_line = daily_line.rstrip()
        date, hour, count, symbol = daily_line.split()
        if average_date == date:
            print(f"date={date}, average_count={average_count} count={count}")
Or a dictionary
# populate data into dictionaries
daily_average_data = dict()
for line in daily_average:
    line = line.rstrip()
    day, count, symbol = line.split()
    daily_average_data[day] = (day, count, symbol)

daily_data = dict()
for line in daily:
    line = line.rstrip()
    day, hour, count, symbol = line.split()
    if day not in daily_data:
        daily_data[day] = list()
    daily_data[day].append((day, hour, count, symbol))

# now you can access daily_average_data and daily_data as
# dictionaries instead of files

# process data
result = list()
for date in daily_data.keys():
    print(date)
    print(daily_average_data[date])
    print(daily_data[date])
If the data items corresponded with one another line by line, you could use zip (https://realpython.com/python-zip-function/). Here is an example:

for data1, data2 in zip(daily_average, daily):
    print(f"{data1} {data2}")
Similar to what @oasispolo described, the solution is to make a single loop and process both lists in it. I'm personally not fond of the "zip" function. (It's a purely stylistic objection; lots of other people like it and that's fine.)
Here's a solution with syntax that I find more intuitive:
print('Solving the Temperature Model programming problem')

fhand1 = open('Interpolate_CY21_short.txt', 'r')
fhand2 = open('WSE_Daily_CY21_short.txt', 'r')

# Convert each file into a list of lines. You're doing this
# implicitly, but I like to be explicit about it.
lines1 = fhand1.readlines()
lines2 = fhand2.readlines()

if len(lines1) != len(lines2):
    raise ValueError("The two files have different lengths!")

# Initialize an output array. You could also construct it
# one item at a time, but that can be slow for large arrays.
# It is more efficient to initialize the entire array at
# once if possible.
sf_list = [0]*len(lines1)

for position in range(len(lines1)):
    # range(L) generates numbers 0...L-1
    line1 = lines1[position].rstrip()
    words1 = line1.split()
    x = float(words1[7])
    if x < 100:
        x = 0

    line2 = lines2[position].rstrip()
    words2 = line2.split()
    aa = float(words2[2])*24

    sf_list[position] = x * aa

print(sf_list)
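For the scaling itself (building Data Set C), here is a minimal sketch that groups the hourly values by date first. It assumes the simplified line formats shown in the question (the real hourly file apparently has more columns, since the code above reads words1[7]), and the file names are only placeholders:

from collections import defaultdict

# Group hourly flows (Data Set B) by date: {"1/1/2021": [150.0, 0.0, 255.0, ...]}
hourly_by_date = defaultdict(list)
with open('hourly.txt') as f:  # placeholder name
    for line in f:
        date, hour, flow, unit = line.split()
        hourly_by_date[date].append(float(flow))

# Daily averages (Data Set A): {"1/1/2021": 93.0}
daily_avg = {}
with open('daily.txt') as f:  # placeholder name
    for line in f:
        date, flow, unit = line.split()
        daily_avg[date] = float(flow)

# SF = (daily average * 24) / (sum of that day's hourly values)
# e.g. 1/1/2021: (93 * 24) / 2596 = 2232 / 2596 = 0.8597
with open('scaled.txt', 'w') as out:
    for date, flows in hourly_by_date.items():
        total = sum(flows)
        sf = daily_avg[date] * 24 / total if total else 0.0
        for hour, flow in enumerate(flows):
            out.write(f"{date} {hour}:00 {flow * sf:.2f} cfs\n")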
I have a dataset in JSON with GPS coordinates:
"utc_date_and_time":"2021-06-05 13:54:34", # timestamp
"hdg":"018.0", # heading
"sog":"000.0", # speed
"lat":"5905.3262N", # latitude
"lon":"00554.2433E" # longitude
This data will be imported into a database, with one entry every second for every "vessel".
As you can imagine this is a huge amount of data that provides a level of accuracy I do not need.
My goal:
Create a new entry in the database for every X seconds
If I set X to 60 (a minute) and 10 entries are missing within that period, the 50 available entries should be used. Data can be missing for certain periods, and I do not want this to create bogus positions.
Use timestamp from last entry in period.
Use the heading (hdg) that is appearing the most times within this period.
Calculate average speed within this period.
Latitude and longitude could use the last entry, but I have seen "spikes" that need to be filtered out, or use an average and remove values that differ too much.
My script is now pushing all the data to the database via a for loop with different data checks inside it, and this is working.
I am new to Python and still learning every day through reading and YouTube videos, but it would be great if anyone could point me in the right direction for how to achieve the above goal.
As of now the data is imported into a dictionary, and I am wondering if creating a dictionary where the timestamp is the key is the way to go, but I am a little lost.
Code:
import os
import json
from pathlib import Path
from datetime import datetime, timedelta, date

def generator(data):
    for entry in data:
        yield entry

data = json.load(open("5_gps_2021-06-05T141524.1397180000.json"))["gps_data"]
gps_count = len(data)
start_time = None
new_gps = list()
tempdata = list()
seconds = 60
i = 0

for entry in generator(data):
    i = i + 1
    if start_time == None:
        start_time = datetime.fromisoformat(entry['utc_date_and_time'])
    # TODO: Filter out values with too much deviation
    tempdata.append(entry)
    elapsed = (datetime.fromisoformat(entry['utc_date_and_time']) - start_time).total_seconds()
    if (elapsed >= seconds) or (i == gps_count):
        # TODO: Calculate average values etc. instead of using last
        new_gps.append(tempdata)
        tempdata = []
        start_time = None

print("GPS count before:" + str(gps_count))
print("GPS count after:" + str(len(new_gps)))
Output:
GPS count before:1186
GPS count after:20
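To fill in the second TODO (aggregating each period instead of keeping the raw entries), here is a minimal sketch assuming the field names from the JSON sample above; the lat/lon spike filtering is still left open:

from collections import Counter
from statistics import mean

def aggregate(group):
    # Collapse one period's entries (a list of dicts) into a single record.
    return {
        'utc_date_and_time': group[-1]['utc_date_and_time'],  # timestamp of the last entry
        'hdg': Counter(e['hdg'] for e in group).most_common(1)[0][0],  # most frequent heading
        'sog': mean(float(e['sog']) for e in group),  # average speed over the period
        'lat': group[-1]['lat'],  # last known position
        'lon': group[-1]['lon'],
    }

# new_gps currently holds one list of entries per period:
new_gps = [aggregate(g) for g in new_gps]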
I have a huge csv file (524 MB; Notepad takes 4 minutes to open it) that I need to change the formatting of. Now it's like this:
1315922016 5.800000000000 1.000000000000
1315922024 5.830000000000 3.000000000000
1315922029 5.900000000000 1.000000000000
1315922034 6.000000000000 20.000000000000
1315924373 5.950000000000 12.452100000000
The lines are divided by a newline symbol; when I paste the data into Excel it divides into lines. I would have done this with Excel functions, but the file is too big to be opened.
The first value is the number of seconds since 01-01-1970, the second is the price, and the third is the volume ("volumen").
I need it to be like this:
01-01-2009 13:55:59 5.800000000000 1.000000000000 01-01-2009 13:56:00 5.830000000000 3.000000000000
etc.
Records need to be divided by a space. Sometimes there are multiple price values for the same second, like this:
1328031552 6.100000000000 2.000000000000
1328031553 6.110000000000 0.342951630000
1328031553 6.110000000000 0.527604200000
1328031553 6.110000000000 0.876088370000
1328031553 6.110000000000 0.971026920000
1328031553 6.100000000000 0.965781090000
1328031589 6.150000000000 0.918752490000
1328031589 6.150000000000 0.940974100000
When this happens, I need the code to take the average price for that second and save just one price per second.
These are bitcoin transactions, which didn't happen every second when BTC started.
When there is no record for some second, a new record needs to be created for it, with the price and volumen copied from the last known values.
Then save everything to a new txt file.
I can't seem to do it; I've been trying to write a converter in Python for hours. Please help.
shlex is a lexical parser. We use it to pick the numbers from the input one at a time. The generator function Records groups these into lists where the first element is an integer and the other two elements are floating-point numbers.
The loop reads the results of Records and averages over equal timestamps as necessary. It also prints two records per line.
from shlex import shlex
import time

lexer = shlex(instream=open('temp.txt'), posix=False)
lexer.wordchars = r'0123456789.\n'
lexer.whitespace = ' \n'
lexer.whitespace_split = True

def Records():
    record = []
    while True:
        token = lexer.get_token()
        if token:
            token = token.strip()
            if token:
                record.append(token)
                if len(record) == 3:
                    record[0] = int(record[0])
                    record[1] = float(record[1])
                    record[2] = float(record[2])
                    yield record
                    record = []
            else:
                break
        else:
            break

def conv_time(t):
    return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(t))

records = Records()
pos = 1

current_date, price, volume = next(records)
price_sum = price
volume_sum = volume
count = 1

for raw_date, price, volume in records:
    if raw_date == current_date:
        price_sum += price
        volume_sum += volume
        count += 1
    else:
        print(conv_time(current_date), price_sum/count, volume_sum/count, end=' ' if pos else '\n')
        pos = (pos+1) % 2
        current_date = raw_date
        price_sum = price
        volume_sum = volume
        count = 1

print(conv_time(current_date), price_sum/count, volume_sum/count, end=' ' if pos else '\n')
Here are the results. You might need to do something about significant digits to the right of the decimal points.
2011-09-13 09:53:36 5.8 1.0 2011-09-13 09:53:44 5.83 3.0
2011-09-13 09:53:49 5.9 1.0 2011-09-13 09:53:54 6.0 20.0
2011-09-13 10:32:53 5.95 12.4521 2012-01-31 12:39:12 6.1 2.0
2012-01-31 12:39:13 6.108 0.736690442 2012-01-31 12:39:49 6.15 0.9298632950000001
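If the fixed 12-decimal layout of the input matters, the two print calls could format explicitly, for example:

print(f"{conv_time(current_date)} {price_sum/count:.12f} {volume_sum/count:.12f}",
      end=' ' if pos else '\n')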
1) Reading the file line by line:

data = {}

with open(<path to file>) as fh:
    while True:
        line = fh.readline().rstrip('\n')
        if not line: break
        values = line.split(' ')
        for n in range(0, len(values), 3):
            dt, price, volumen = values[n:n+3]
2) Checking if it's the next second after the last record's. If so, add the price and volumen values to a running total and increase a counter for later use in calculating the average.
3) If the second is not the next one, copy the values of the last price and volumen:
if not dt in data:
    data[dt] = []
data[dt].append((price, volumen))
4) Divide timestamps like "1328031552" into seconds, minutes, hours, days, months, years. Somehow take care of leap years. Then calculate the average price and volumen for each second:

for dt in data:
    # seconds, minutes, hours, days, months, years = datetime(dt)
    p_sum, v_sum = 0, 0
    for p, v in data[dt]:
        p_sum += float(p)
        v_sum += float(v)
    n = len(data[dt])
    price = p_sum / n
    volumen = v_sum / n
5) Arrange the values in the "01-01-2009 13:55:59 1586.12 220000" order.
6) Add the record to the end of the new database file.
print(datetime, price, volumen)
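Steps 3), 5), and 6) can then be combined into a single forward-fill pass. A minimal sketch, assuming the keys of data have been converted to int and step 4) has reduced each entry to one (price, volumen) tuple of floats:

import time

def conv_time(t):
    # target layout: 01-01-2009 13:55:59
    return time.strftime('%d-%m-%Y %H:%M:%S', time.localtime(t))

with open('converted.txt', 'w') as out:
    seconds = sorted(data)  # the integer timestamps
    last = data[seconds[0]]
    for t in range(seconds[0], seconds[-1] + 1):
        last = data.get(t, last)  # missing second: carry the last known record forward
        price, volumen = last
        out.write(f"{conv_time(t)} {price:.12f} {volumen:.12f} ")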
Python beginner here. I am trying to make use of some data stored in a dictionary.
I have some .npy files in a folder. It is my intention to build a dictionary that encapsulates the following: the reading of the map (done with np.load), the year, month, and day of the current map (as integers), the fractional time in years (given that a month has 30 days; it does not affect my calculations afterwards), the number of pixels, and the number of pixels above a certain value. At the end I expect to get a dictionary like:
{'map0': array (from np.load), 'year', 'month', 'day', 'fractional_time', 'pixels', ...
 'map1': ...}
What I have managed until now is the following:
import glob
import numpy as np

file_list = glob.glob('*.npy')

def only_numbers(seq):  # for getting rid of the '.npy' or any other string
    seq_type = type(seq)
    return seq_type().join(filter(seq_type.isdigit, seq))

maps = {}
numbers = {}
for i in range(len(file_list)):
    maps[i] = np.load(file_list[i])
    numbers[i] = only_numbers(file_list[i])
I have no idea how to get a dictionary to hold more values per key from inside the for loop. I can only manage to generate a new dictionary or a list (e.g. numbers) for every task. For the numbers dictionary, I have no idea how to manipulate the date in the YYYYMMDD format to get the integers I am looking for.
For the pixels, I managed to get it for a single map, using:
data = np.load('20100620.npy')
print('Total pixel count: ', data.size)
c = (data > 50).astype(int)
print('Pixel >50%: ',np.count_nonzero(c))
Any hints? Until now, image processing seems to be quite a challenge.
Edit: Managed to split the dates and make them integers using:

date = list(numbers.values())
year = int(date[i][0:4])
month = int(date[i][4:6])
day = int(date[i][6:8])
print(year, month, day)
If anyone is interested, I managed to do something else. I dropped the idea of a dictionary containing everything, as I needed things to be easier to manipulate further on. I did the following:
import glob
import numpy as np
import matplotlib.pyplot as plt

value = 50  # concentration threshold (%), as in the single-map example above

file_list = glob.glob('data/...')  # files named YYYYMMDD.npy
file_list.sort()

def only_numbers(seq):  # make sure to remove all characters and symbols from the file name
    seq_type = type(seq)
    return seq_type().join(filter(seq_type.isdigit, seq))

numbers = {}
time = []
np_above_value = []

for i in range(len(file_list)):
    maps = np.load(file_list[i])
    maps[np.isnan(maps)] = 0  # had some NaNs and was getting errors
    numbers[i] = only_numbers(file_list[i])  # dictionary of the file names reduced to the dates only
    date = list(numbers.values())  # the file names (only the numbers) as a list
    year = int(date[i][0:4])   # first 4 characters (YYYY) as an integer, as required
    month = int(date[i][4:6])  # next 2 characters (MM)
    day = int(date[i][6:8])    # next 2 characters (DD)
    time.append(year + ((month - 1) * 30 + day) / 360)  # fractional time
    print('Total pixel count for map ' + str(i) + ':', maps.size)  # total pixels for the current map
    c = (maps > value).astype(int)
    np_above_value.append(np.count_nonzero(c))  # number of pixels above the threshold
    print('Pixels with concentration >value% for map ' + str(i) + ':', np.count_nonzero(c))

plt.plot(time, np_above_value)  # pixels above the threshold as a function of time
I know it might be very clumsy. Second week of Python, so please overlook that. It does the trick :)
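For completeness, the dictionary-of-everything originally described could still be built in one pass. A minimal sketch, reusing only_numbers and the threshold value from the code above (the key and field names are just one possible layout, not a fixed API):

maps = {}
for i, fname in enumerate(sorted(glob.glob('*.npy'))):
    data = np.load(fname)
    data[np.isnan(data)] = 0
    digits = only_numbers(fname)  # e.g. '20100620'
    year, month, day = int(digits[:4]), int(digits[4:6]), int(digits[6:8])
    maps[f'map{i}'] = {
        'array': data,
        'year': year,
        'month': month,
        'day': day,
        'fractional_time': year + ((month - 1) * 30 + day) / 360,  # 30-day-month convention
        'pixels': data.size,
        'pixels_above': int(np.count_nonzero(data > value)),
    }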