Each driver (vehicle) needs a 30-minute break after 8 hours on duty. Referencing a previous issue, here is my code:
# Add breaks
node_visit_transit = {}
for n in range(routing.Size()):
    if n >= len(data['node_name']):
        node_visit_transit[n] = 0
    else:
        if n in data['pickups']:
            node_visit_transit[n] = int(data['load_time'][n])
        else:
            node_visit_transit[n] = int(data['unload_time'][n])

break_intervals = {}
for v in range(data['num_vehicles']):
    break_intervals[v] = [
        routing.solver().FixedDurationIntervalVar(
            MAX_TIME_BETWEEN_BREAKS, MAX_TIME_BETWEEN_BREAKS, BREAK_DURATION,
            False, 'Break for vehicle {}'.format(v))
    ]
    time_dimension.SetBreakIntervalsOfVehicle(
        break_intervals[v], v, node_visit_transit)
However, no solution is found when this snippet is added, whereas a solution existed previously. A few questions:
What is node_visit_transit used for? From the GitHub code it seems to account for some form of loading time (based on load demand), so I adapted it for load/unload time here.
Am I using FixedDurationIntervalVar correctly? I would like drivers to have a BREAK_DURATION break after they've been on duty for MAX_TIME_BETWEEN_BREAKS hours, regardless of start time. I'm concerned this is forcing them to take a break at the absolute time MAX_TIME_BETWEEN_BREAKS, i.e. if MAX_TIME_BETWEEN_BREAKS = 8 then they all take a break at 08:00.
It seems like someone has figured this out here, but there is no solution example.
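For reference, the first two arguments to FixedDurationIntervalVar are the earliest and latest allowed start times of the break, so passing MAX_TIME_BETWEEN_BREAKS for both does pin every break to that absolute time. A minimal sketch with a widened start window (horizon is an assumed planning-horizon constant; tying the break to each vehicle's actual time on duty would still need an extra constraint and is not shown):
break_intervals = {}
for v in range(data['num_vehicles']):
    break_intervals[v] = [
        routing.solver().FixedDurationIntervalVar(
            0,                          # earliest allowed start of the break
            horizon - BREAK_DURATION,   # latest allowed start (assumed horizon constant)
            BREAK_DURATION,
            False,                      # the break is mandatory, not optional
            'Break for vehicle {}'.format(v))
    ]
    time_dimension.SetBreakIntervalsOfVehicle(
        break_intervals[v], v, node_visit_transit)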
I have a problem with a competition question I'm attempting. Here is the question (it's a bit long):
""""
Welcome aboard, Captain! Today you are in charge of the first ever doughnut-shaped spaceship, The
Circular. There are N cabins arranged in a circle on the spaceship. They are numbered from 1 to N in
a clockwise direction around the ship. The ith and the (i + 1)th cabins are connected. So too are cabin
1 and cabin N.
Currently the ith cabin has Ai crewmates; however, the spaceship cannot depart unless there are
exactly Bi crewmates in this cabin.
To achieve this, you have the power to pay crewmates to change cabins. You can pay a crewmate $1 to
move to an adjacent cabin. A crewmate can be asked to move multiple times, provided that you pay
them $1 each time.
What is the fewest dollars you must pay before you can depart? It is always possible to depart.
""""
https://orac2.info/problem/aio22spaceship/ (the link to the interactive question)
I searched the web and found no solutions to this question. My code seems to be infinite looping, I guess, but I'm not sure, as I can't see which cases the site uses to determine if my code is right.
Here's my code:
#!/usr/bin/env python
import sys
sys.setrecursionlimit(1000000000)
#
# Solution Template for Spaceship Shuffle
#
# Australian Informatics Olympiad 2022
#
# This file is provided to assist with reading and writing of the input
# files for the problem. You may modify this file however you wish, or
# you may choose not to use this file at all.
#
# N is the number of cabins.
N = None
# A contains the initial number of crewmates in each cabin. Note that here the
# cabins are numbered starting from 0.
A = []
# B contains the desired number of crewmates in each cabin. Note that here the
# cabins are numbered starting from 0.
B = []
answer = 0
# Open the input and output files.
input_file = open("spacein.txt", "r")
output_file = open("spaceout.txt", "w")
# Read the value of N.
N = int(input_file.readline().strip())
# Read the values of A and B.
input_line = input_file.readline().strip()
A = list(map(int, input_line.split()))
input_line = input_file.readline().strip()
B = list(map(int, input_line.split()))
AM = A
#AM is my modifying set
# TODO: This is where you should compute your solution. Store the fewest
# dollars you must pay before you can depart into the variable
while AM != B:
    # Check if the set is correct
    # notfound is a testing variable to see if my code was looping due to input error
    notfound = True
    for i in range(N):
        # Check which places need people to be moved
        while AM[i] > B[i]:
            notfound = False
            # RV and LV check the "neediness" for each half's people requirements. I check how many people
            # are needed on one side compared to the other and subtract the "overflow of people"
            RV = 0
            LV = 0
            for j in range(int(N / 2 - 0.5)):
                # The range makes sure that if N is odd, I'm splitting the middle, but if N is even, I leave out the end pod
                RV += B[(i + j + 1) % N] - AM[(i + j + 1) % N]
                LV += B[(i - j - 1) % N] - AM[(i - j - 1) % N]
            answer += 1
            if RV > LV:
                AM[i] -= 1
                AM[(i + 1) % N] += 1
            else:
                AM[i] -= 1
                AM[(i - 1) % N] += 1
            print(AM, B)
    if notfound:
        break
print(answer)
# Write the answer to the output file.
output_file.write("%d\n" % (answer))
# Finally, close the input/output files.
input_file.close()
output_file.close()
Please help, I really need to know the answer; it's driving me mad, not gonna lie.
Welp, there aren't any resources online and I've tried everything. I think the problem might be that, because of my solving method, passengers may be flicked between two pods indefinitely. I'm not sure, since I couldn't make a case that demoed this.
Also, my post is probably messy since this is my first time posting.
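For reference, a different approach that is often used for this kind of circular balancing problem (not my greedy shuffling above) works with prefix sums and a median; a minimal sketch, assuming the totals of A and B match as the problem guarantees:
# Let d[i] = A[i] - B[i]. If x[i] is the net number of crewmates moved from
# cabin i to cabin i+1, then x[i] = x[i-1] + d[i], so every x[i] is a prefix
# sum of d shifted by one common constant. The total cost sum(|x[i]|) is
# minimised by choosing that constant as the median of the prefix sums.
def min_cost(A, B):
    prefix_sums = []
    running = 0
    for a, b in zip(A, B):
        running += a - b
        prefix_sums.append(running)
    prefix_sums.sort()
    median = prefix_sums[len(prefix_sums) // 2]
    return sum(abs(p - median) for p in prefix_sums)

# e.g. min_cost([3, 0, 0], [1, 1, 1]) == 2 (send one crewmate each way from cabin 1)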
Thanks to Stack Overflow, I was able to get help finishing one of the first steps of my project. The first steps were to receive 2 files as txt, then merge and sort them by date. Currently, I am stuck on trying to loop over a list with 3 fields [Date, Buy/Sell, Quantity]. This part of the program must:
Check the list for the first date and, if it is a purchase, add to walletList.
If it is a sale, subtract from the previous walletList balance.
Create a function to get daily avg. price.
from typing import List

def file_to_list(file: str) -> list:
    list = []
    with open(file) as f:
        for line in f:
            list.append(line.strip().replace(";", ","))
    return list

def merge_list():
    bitcoin = file_to_list("bitcoin.txt")
    exchange = file_to_list("exchange.txt")
    combined = bitcoin + exchange
    sorted_combined = sorted(combined)
    return sorted_combined

def getBuyorSell(sorted_combined: List):
    walletList = []
    #transactionDate = [line.split(",")[0] for line in sorted_combined]
    #transactionType = [line.split(",")[1] for line in sorted_combined]
    for line in sorted_combined:
        if line.split(",")[1] == "buy":
            walletList.append(line.split(",")[2])
    for line in sorted_combined:
        if line.split(",")[1] == "sell":
            walletList.remove(line.split(",")[2])
    str2int = [eval(i) for i in walletList]
    totalWallet = sum(str2int)
    print(f"Total of wallet: {totalWallet}")

if __name__ == "__main__":
    sorted_combined = merge_list()
    show_transactions(sorted_combined)
    getBuyorSell(sorted_combined)
Price is fixed at 20,000 for ease.
Will implement a price getter for each day in the future.
bitcoin.txt
2022-08-01;buy;100
2022-08-04;buy;50
2022-08-06;buy;10
exchange.txt
2022-08-02;buy;200
2022-08-03;sell;50
2022-08-05;sell;25
Note: the getBuyorSell function is sloppy and not working; I am aware of this. I just showed it so everyone can see my thought process. My work day is over and the office is closing, so I won't be able to work on this until tomorrow, but I figured I'd post it anyway. If this is not okay, I can come back tomorrow to ask a more detailed and better-formed question with more complete code. However, if anyone can help guide me in the meantime, it would be greatly appreciated. (I know I must use calculations to get the avg. price; I just used remove to show the thought process.)
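In the meantime, here is a minimal sketch of the running-balance loop I have in mind (field order Date,Buy/Sell,Quantity as in the merged list above, fixed price of 20,000; wallet_balance is just a placeholder name):
PRICE = 20_000  # fixed price for now, as noted above

def wallet_balance(sorted_combined):
    balance = 0
    for line in sorted_combined:
        date, kind, quantity = line.split(",")
        if kind == "buy":
            balance += int(quantity)
        elif kind == "sell":
            balance -= int(quantity)
        print(f"{date}: balance={balance}, value={balance * PRICE}")
    return balance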
My idea is as follows, and I really want to learn more about programming and how to structure a program:
I want to count waves on a stock chart.
Within the Elliott Wave Rules are some specifications, like (most basic):
Wave 2 never retraces more than 100% of wave 1.
Wave 3 cannot be the shortest of the three impulse waves, namely waves 1, 3 and 5.
Wave 4 does not overlap with the price territory of wave 1, except in the
rare case of a diagonal triangle formation.
(from Wikipedia https://en.wikipedia.org/wiki/Elliott_wave_principle#Wave_rules_and_guidelines)
There are more sophisticated rules of course, but in my imagination they could be addressed by the same iterative logic with which I want to apply my basic rules.
Please, guys and girls, give me feedback on whether my thoughts make any sense in structure and layout for setting up a program, because I lack experience here:
I want to find the minima and maxima and give them a wave count depending on the minima and maxima before.
Therefore I would check every candle (every closing price, day, hour, etc.) to see whether its value is below or above the previous value(s). For example:
If there are two candles going up, then one down, then three up, then two down, then two up, this could be a complete impulse wave according to the above-listed rules. In total I would have 10 candles, and the following rules must apply:
The third candle (or the first that goes down after the two going up) must not close below the starting price of the initial candle. AND it must also hold that the following candles (however many that turns out to be) all go up in a row until they exceed the price of the previous maximum (the second candle).
When the price starts to drop again, it could be counted as wave 4 (the second minimum in the sequence), and when it goes up again, this would indicate wave 5.
Then it must also hold that, if the price starts to go down again, it does not close below the first maximum (in this case the second candle).
And so on and so on.
My question now is: is this kind of looping through certain data points even an appropriate way to approach this kind of project? Or am I totally wrong here?
I just thought: because of the fractal character of Elliott waves, I would only need very basic rules that depend on what the same iterative process spits out the previous times it scans the data points.
What do you think?
Is there a better, smarter way to realise what I am planning to do?
And how could I do this in a good way?
Maybe there is also a way to just feed some patterns into a predefined execution structure and then let it run over data points such as price charts.
What would your approach look like?
Thanks a lot and best wishes, Benjamin
Here is my idea/code for finding highs and lows. It doesn't work standalone. If you have any idea how it can help to find waves, let me know.
import pandas as pd
import config.Text


class AnalyzerHighLow(object):
    def __init__(self, df):
        self.high_low = None
        self.df = df.close.values
        self.highs = pd.DataFrame(columns=[config.Text.date, config.Text.extrema, config.Text.type])
        self.lows = pd.DataFrame(columns=[config.Text.date, config.Text.extrema, config.Text.type])

    def highlow(self):
        idx_start = 0
        self.find_high(self.df, idx_start)
        self.find_low(self.df, idx_start)
        self.high_low = pd.concat([self.highs, self.lows], ignore_index=True, sort=True, axis=0)
        self.high_low = self.high_low.sort_values(by=[config.Text.date])
        self.high_low = self.high_low.reset_index(drop=True)
        return self.high_low

    def find_high(self, high_low, idx_start):
        # Walk forward from idx_start, tracking the running pivot high; once the
        # price turns down again, record the pivot and restart from there.
        pvt_high = high_low[idx_start]
        reached = False
        for i in range(idx_start + 1, len(high_low)):
            act_high = high_low[i]
            if act_high > pvt_high:
                reached = True
                pvt_high = act_high
            elif act_high < pvt_high and reached is True:
                self.highs.loc[i - 1] = [i - 1, pvt_high, config.Text.maxima]
                return self.find_high(high_low, i)
            elif act_high < pvt_high:
                pvt_high = high_low[i]
            if (reached is True) and (i == (len(high_low) - 1)):
                # record the final pivot when the series ends on the way up
                self.highs.loc[i - 1] = [i - 1, pvt_high, config.Text.maxima]

    def find_low(self, high_low, idx_start):
        # Mirror image of find_high for pivot lows.
        pvt_low = high_low[idx_start]
        reached = False
        for i in range(idx_start + 1, len(high_low)):
            act_low = high_low[i]
            if act_low < pvt_low:
                reached = True
                pvt_low = act_low
            elif act_low > pvt_low and reached is True:
                self.lows.loc[i - 1] = [i - 1, pvt_low, config.Text.minima]
                return self.find_low(high_low, i)
            elif act_low > pvt_low:
                pvt_low = high_low[i]
            if (reached is True) and (i == (len(high_low) - 1)):
                self.lows.loc[i - 1] = [i - 1, pvt_low, config.Text.minima]
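To connect the extrema back to the rules above, here is a rough sketch (only a starting point, assuming an upward impulse and a list of six alternating pivot prices, e.g. taken from high_low) of how the three basic rules could be checked:
def is_valid_impulse(pivots):
    # pivots = [p0, p1, p2, p3, p4, p5]: alternating low/high prices, so the
    # moves p0->p1, ..., p4->p5 are candidate waves 1 to 5 (upward case).
    if len(pivots) != 6:
        return False
    p0, p1, p2, p3, p4, p5 = pivots
    wave1 = abs(p1 - p0)
    wave3 = abs(p3 - p2)
    wave5 = abs(p5 - p4)
    # Rule: wave 2 never retraces more than 100% of wave 1.
    if p2 <= p0:
        return False
    # Rule: wave 3 cannot be the shortest of waves 1, 3 and 5.
    if wave3 < wave1 and wave3 < wave5:
        return False
    # Rule: wave 4 does not overlap the price territory of wave 1.
    if p4 <= p1:
        return False
    return True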
I am trying to get a list of all JIRA issues so that I may iterate through them in the following manner:
from jira import JIRA
jira = JIRA(basic_auth=('username', 'password'), options={'server':'https://MY_JIRA.atlassian.net'})
issue = jira.issue('ISSUE_KEY')
print(issue.fields.project.key)
print(issue.fields.issuetype.name)
print(issue.fields.reporter.displayName)
print(issue.fields.summary)
print(issue.fields.comment.comments)
The code above returns the desired fields (but only one issue at a time); however, I need to be able to pass a list of all issue keys into:
issue = jira.issue('ISSUE_KEY')
The idea is to write a for loop that would go through this list and print the indicated fields.
I have not been able to populate this list.
Can someone point me in the right direction please?
def get_all_issues(jira_client, project_name, fields):
    issues = []
    i = 0
    chunk_size = 100
    while True:
        chunk = jira_client.search_issues(f'project = {project_name}', startAt=i, maxResults=chunk_size, fields=fields)
        i += chunk_size
        issues += chunk.iterable
        if i >= chunk.total:
            break
    return issues
issues = get_all_issues(jira, 'JIR', ["id", "fixVersion"])
options = {'server': 'YOUR SERVER NAME'}
jira = JIRA(options, basic_auth=('YOUR EMAIL', 'YOUR PASSWORD'))
size = 100
initial = 0
while True:
    start = initial * size
    issues = jira.search_issues('project=<NAME OR ID>', start, size)
    if len(issues) == 0:
        break
    initial += 1
    for issue in issues:
        print('ticket-no=', issue)
        print('IssueType=', issue.fields.issuetype.name)
        print('Status=', issue.fields.status.name)
        print('Summary=', issue.fields.summary)
The first 3 arguments of jira.search_issues() are the JQL query, the starting index (0-based, hence the multiplication on line 6) and the maximum number of results.
You can execute a search instead of a single issue get.
Let's say your project key is PRO-KEY; to perform a search, you have to use this query:
https://MY_JIRA.atlassian.net/rest/api/2/search?jql=project=PRO-KEY
This will return the first 50 issues of PRO-KEY and, in the field total, the total number of issues present.
Taking that number, you can perform further searches by appending the following to the previous query:
&startAt=50
With this new parameter you will be able to fetch the issues from 51 to 100 (or 50 to 99 if you consider the first issue 0).
The next query will be &startAt=100 and so on, until you have fetched all the issues in PRO-KEY.
If you wish to fetch more than 50 issues, add to the query:
&maxResults=200
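Put together, a single paginated request then looks like this (values taken from above):
https://MY_JIRA.atlassian.net/rest/api/2/search?jql=project=PRO-KEY&startAt=100&maxResults=200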
You can use the jira.search_issues() method to pass in a JQL query. It will return the list of issues matching the JQL:
issues_in_proj = jira.search_issues('project=PROJ')
This will give you a list of issues that you can iterate through.
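For example, reusing the fields from the question (pagination left out for brevity):
issues_in_proj = jira.search_issues('project=PROJ', maxResults=50)
for issue in issues_in_proj:
    print(issue.key)
    print(issue.fields.issuetype.name)
    print(issue.fields.reporter.displayName)
    print(issue.fields.summary)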
Starting with Python 3.8, reading all issues can be done relatively short and elegantly:
issues = []
while issues_chunk := jira.search_issues('project=PROJ', startAt=len(issues)):
    issues += list(issues_chunk)
(Since we need len(issues) in every step, we cannot use a list comprehension, can we?)
Together with initialization, caching and "preprocessing" (e.g. just taking issue.raw), you could write something like this:
import json
import os

import jira

jira = jira.JIRA(
    server="https://jira.at-home.com",
    basic_auth=json.load(open(os.path.expanduser("~/.jira-credentials"))),
    validate=True,
)
issues = json.load(open("jira_issues.json"))
while issues_chunk := jira.search_issues('project=PROJ', startAt=len(issues)):
    issues += [issue.raw for issue in issues_chunk]
json.dump(issues, open("jira_issues.json", "w"))
We have a server that gets cranky if it gets too many users logging in at the same time (meaning less than 7 seconds apart). Once the users are logged in, there is no problem (one or two logging in at the same time is also not a problem, but when 10-20 try, the entire server goes into a death spiral, sigh).
I'm attempting to write a page that will hold onto users (displaying an animated countdown etc.) and let them through 7 seconds apart. The algorithm is simple:
fetch the timestamp (t) when the last login happened
if t+7 is in the past start the login and store now() as the new timestamp
if t+7 is in the future, store it as the new timestamp, wait until t+7, then start the login.
A straightforward Python/Redis implementation would be:
import time, redis
SLOT_LENGTH = 7 # seconds
now = time.time()
r = redis.StrictRedis()
# lines below contain race condition..
last_start = float(r.get('FLOWCONTROL') or '0.0') # 0.0 == time-before-time
my_start = last_start + SLOT_LENGTH
r.set('FLOWCONTROL', max(my_start, now))
wait_period = max(0, my_start - now)
time.sleep(wait_period)
# .. login
The race condition here is obvious: many processes can be at the my_start = line simultaneously. How can I solve this using Redis?
I've tried the redis-py pipeline functionality, but of course that doesn't return an actual value from the r.get() call until the pipeline is executed...
I'll document the answer in case anyone else finds this...
from redis.exceptions import WatchError

r = redis.StrictRedis()
with r.pipeline() as p:
    while 1:
        try:
            p.watch('FLOWCONTROL')   # --> immediate mode
            last_slot = float(p.get('FLOWCONTROL') or '0.0')
            p.multi()                # --> back to buffered mode
            my_slot = last_slot + SLOT_LENGTH
            p.set('FLOWCONTROL', max(my_slot, now))
            p.execute()              # raises WatchError if anyone changed FLOWCONTROL
            break                    # break out of while loop
        except WatchError:
            pass                     # someone else got there before us, retry.
a little more complex than the original three lines...
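For completeness, another way to avoid the race (not part of the answer above) is to push the read-modify-write into a single Lua script, which Redis runs atomically, so no WATCH/retry loop is needed at all; reserve_slot is just a name chosen for this sketch:
import time
import redis

SLOT_LENGTH = 7  # seconds

r = redis.StrictRedis()
# The script runs atomically inside Redis: read the last reserved slot,
# compute ours, store the new value and return our start time as a string
# (returning a Lua number would truncate the float).
reserve_slot = r.register_script("""
local last = tonumber(redis.call('GET', KEYS[1]) or '0')
local now = tonumber(ARGV[1])
local my_start = last + tonumber(ARGV[2])
redis.call('SET', KEYS[1], tostring(math.max(my_start, now)))
return tostring(my_start)
""")

now = time.time()
my_start = float(reserve_slot(keys=['FLOWCONTROL'], args=[now, SLOT_LENGTH]))
time.sleep(max(0, my_start - now))
# .. login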