Building a greedy task scheduler - Python algorithm
Working on the following Leetcode problem: https://leetcode.com/problems/task-scheduler/
Given a char array representing tasks a CPU needs to do. It contains
capital letters A to Z, where different letters represent different
tasks. Tasks can be done in any order. Each task takes one interval to
complete. In each interval, the CPU can either finish one task or be
idle.
However, there is a non-negative cooling interval n: between two
occurrences of the same task, there must be at least n intervals in
which the CPU is doing different tasks or is idle.
You need to return the least number of intervals the CPU will take to
finish all the given tasks.
Example:
Input: tasks = ["A","A","A","B","B","B"], n = 2
Output: 8
Explanation: A -> B -> idle -> A -> B -> idle -> A -> B.
I've written code that passes the majority of the Leetcode test cases, but it is failing on a very large input. Here's my code:
```python
import heapq
from collections import Counter

class Solution(object):
    def leastInterval(self, tasks, n):
        CLOCK = 0
        if not tasks:
            return len(tasks)
        counts = Counter(tasks)
        unvisited_tasks = counts.most_common()[::-1]
        starting_task, _ = unvisited_tasks.pop()
        queue = [[0, starting_task]]
        while queue or unvisited_tasks:
            while queue and CLOCK >= queue[0][0]:
                _, task = heapq.heappop(queue)
                counts[task] -= 1
                if counts[task] > 0:
                    heapq.heappush(queue, [CLOCK + 1 + n, task])
                CLOCK += 1
            if unvisited_tasks:
                t, _ = unvisited_tasks.pop()
                heapq.heappush(queue, [0, t])
            else:
                # must go idle
                if queue:
                    CLOCK += 1
        return CLOCK
```
Here's the (large) input case:
tasks = ["G","C","A","H","A","G","G","F","G","J","H","C","A","G","E","A","H","E","F","D","B","D","H","H","E","G","F","B","C","G","F","H","J","F","A","C","G","D","I","J","A","G","D","F","B","F","H","I","G","J","G","H","F","E","H","J","C","E","H","F","C","E","F","H","H","I","G","A","G","D","C","B","I","D","B","C","J","I","B","G","C","H","D","I","A","B","A","J","C","E","B","F","B","J","J","D","D","H","I","I","B","A","E","H","J","J","A","J","E","H","G","B","F","C","H","C","B","J","B","A","H","B","D","I","F","A","E","J","H","C","E","G","F","G","B","G","C","G","A","H","E","F","H","F","C","G","B","I","E","B","J","D","B","B","G","C","A","J","B","J","J","F","J","C","A","G","J","E","G","J","C","D","D","A","I","A","J","F","H","J","D","D","D","C","E","D","D","F","B","A","J","D","I","H","B","A","F","E","B","J","A","H","D","E","I","B","H","C","C","C","G","C","B","E","A","G","H","H","A","I","A","B","A","D","A","I","E","C","C","D","A","B","H","D","E","C","A","H","B","I","A","B","E","H","C","B","A","D","H","E","J","B","J","A","B","G","J","J","F","F","H","I","A","H","F","C","H","D","H","C","C","E","I","G","J","H","D","E","I","J","C","C","H","J","C","G","I","E","D","E","H","J","A","H","D","A","B","F","I","F","J","J","H","D","I","C","G","J","C","C","D","B","E","B","E","B","G","B","A","C","F","E","H","B","D","C","H","F","A","I","A","E","J","F","A","E","B","I","G","H","D","B","F","D","B","I","B","E","D","I","D","F","A","E","H","B","I","G","F","D","E","B","E","C","C","C","J","J","C","H","I","B","H","F","H","F","D","J","D","D","H","H","C","D","A","J","D","F","D","G","B","I","F","J","J","C","C","I","F","G","F","C","E","G","E","F","D","A","I","I","H","G","H","H","A","J","D","J","G","F","G","E","E","A","H","B","G","A","J","J","E","I","H","A","G","E","C","D","I","B","E","A","G","A","C","E","B","J","C","B","A","D","J","E","J","I","F","F","C","B","I","H","C","F","B","C","G","D","A","A","B","F","C","D","B","I","I","H","H","J","A","F","J","F","J","F","H","G","F","D","J","G","I","E","B","C","G","I"
,"F","F","J","H","H","G","A","A","J","C","G","F","B","A","A","E","E","A","E","I","G","F","D","B","I","F","A","B","J","F","F","J","B","F","J","F","J","F","I","E","J","H","D","G","G","D","F","G","B","J","F","J","A","J","E","G","H","I","E","G","D","I","B","D","J","A","A","G","A","I","I","A","A","I","I","H","E","C","A","G","I","F","F","C","D","J","J","I","A","A","F","C","J","G","C","C","H","E","A","H","F","B","J","G","I","A","A","H","G","B","E","G","D","I","C","G","J","C","C","I","H","B","D","J","H","B","J","H","B","F","J","E","J","A","G","H","B","E","H","B","F","F","H","E","B","E","G","H","J","G","J","B","H","C","H","A","A","B","E","I","H","B","I","D","J","J","C","D","G","I","J","G","J","D","F","J","E","F","D","E","B","D","B","C","B","B","C","C","I","F","D","E","I","G","G","I","B","H","G","J","A","A","H","I","I","H","A","I","F","C","D","A","C","G","E","G","E","E","H","D","C","G","D","I","A","G","G","D","A","H","H","I","F","E","I","A","D","H","B","B","G","I","C","G","B","I","I","D","F","F","C","C","A","I","E","A","E","J","A","H","C","D","A","C","B","G","H","G","J","G","I","H","B","A","C","H","I","D","D","C","F","G","B","H","E","B","B","H","C","B","G","G","C","F","B","E","J","B","B","I","D","H","D","I","I","A","A","H","G","F","B","J","F","D","E","G","F","A","G","G","D","A","B","B","B","J","A","F","H","H","D","C","J","I","A","H","G","C","J","I","F","J","C","A","E","C","H","J","H","H","F","G","E","A","C","F","J","H","D","G","G","D","D","C","B","H","B","C","E","F","B","D","J","H","J","J","J","A","F","F","D","E","F","C","I","B","H","H","D","E","A","I","A","B","F","G","F","F","I","E","E","G","A","I","D","F","C","H","E","C","G","H","F","F","H","J","H","G","A","E","H","B","G","G","D","D","D","F","I","A","F","F","D","E","H","J","E","D","D","A","J","F","E","E","E","F","I","D","A","F","F","J","E","I","J","D","D","G","A","C","G","G","I","E","G","E","H","E","D","E","J","B","G","I","J","C","H","C","C","A","A","B","C","G","B","D","I","D","E","H","J","J","B","F","E","J","H","H","I","G"
,"B","D"]
n = 1
My code outputs an interval count of 1002, but the correct answer is 1000. Because the input is so large, I'm having trouble debugging by hand to see where this goes wrong.
My algorithm essentially does the following:
1. Build a mapping of character to number of occurrences.
2. Start with the task that occurs the largest number of times.
3. When you visit a task, enqueue its next occurrence for CLOCK + 1 + n iterations later, because my premise is that you want to revisit a task as soon as you're able to.
4. If you can't revisit an already-visited task, enqueue a new one, without incrementing the clock.
5. If you have elements in the queue but not enough time has passed, increment the clock.
At the end, the CLOCK variable describes how long (in other words, how many "intervals") it took to run all tasks.
Can someone spot the bug in my logic?
Consider a case where the delay is n=1 and you have a task distribution like the one below, for which the least number of cycles is just the length of the list (the tasks could be run like "ABCABC...D"):
{"A": 100, "B": 100, "C": 99, "D": 1}  # {"task": <# of occurrences>, ...}
Using your algorithm, you would process all of the "A"s and "B"s first, since you always move on to the next occurrence of the same task type as soon as possible, without considering other task types. After processing those two, you're left with:
{"C": 99, "D": 1}
which results in 97 idle cycles (the 99 "C"s leave 98 gaps, and "D" fills only one of them).
To fix this, the ideal schedule is something like a round robin over task types: in each window of n + 1 slots, run the tasks with the most remaining occurrences first.
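One standard way to implement that round robin is a max-heap of remaining counts: in every window of n + 1 slots, run the most frequent remaining tasks first. This is a sketch of the idea, not the asker's code:

```python
import heapq
from collections import Counter

def least_interval(tasks, n):
    # Max-heap of remaining counts (negated, since heapq is a min-heap).
    heap = [-c for c in Counter(tasks).values()]
    heapq.heapify(heap)
    clock = 0
    while heap:
        # Round robin: within one window of n + 1 slots, run each
        # distinct task at most once, most frequent first.
        batch = []
        for _ in range(n + 1):
            if heap:
                batch.append(heapq.heappop(heap))
        for count in batch:
            if count + 1 < 0:          # this task still has occurrences left
                heapq.heappush(heap, count + 1)
        # A full window always costs n + 1 slots, except the final one,
        # which has no trailing idles.
        clock += (n + 1) if heap else len(batch)
    return clock

print(least_interval(["A", "A", "A", "B", "B", "B"], 2))  # 8
```

With the {"A": 100, "B": 100, "C": 99, "D": 1} distribution and n=1, each window now mixes two different task types, so no idles are needed until the very end.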
Related
random.random() generates same number in multiprocessing
I'm working on an optimization problem, and you can see a simplified version of my code posted below (the original code is too complicated for a question like this, and I hope my simplified code simulates the original as closely as possible).
My purpose: use the function foo inside the function optimization, but foo can take a very long time in some hard cases. So I use multiprocessing to set a time limit on the execution of the function (proc.join(iter_time); the method is from an answer to this question: How to limit execution time of a function call?).
My problem: in the while loop, the generated values for extra are the same every time. The list lst's length is always 1, which means every iteration of the while loop starts from an empty list.
My guess: one possible reason is that each time I create a process, the random seed starts from the beginning again, and each time the process is terminated, some garbage-collection mechanism cleans up the memory the process used, so the list is cleared.
My questions: does anyone know the real reason for this behavior? And if not multiprocessing, is there another way I can achieve my purpose while generating different random numbers? By the way, I have tried func_timeout, but it has other problems that I cannot handle...

```python
import multiprocessing
import random
import time

random.seed(123)
lst = []  # a global list for logging data

def foo(epoch):
    ...
    extra = random.random()
    lst.append(epoch + extra)
    ...

def optimization(loop_time, iter_time):
    start = time.time()
    epoch = 0
    while time.time() <= start + loop_time:
        proc = multiprocessing.Process(target=foo, args=(epoch,))
        proc.start()
        proc.join(iter_time)
        if proc.is_alive():  # if the process is not terminated within the time limit
            print("Time out!")
            proc.terminate()

if __name__ == '__main__':
    optimization(300, 2)
```
You need to use shared memory if you want to share variables across processes, because child processes do not share their memory space with the parent. The simplest way to do that here is to use managed lists, and to delete the line where you set the seed: that line is what causes the same number to be generated, because every child process inherits the same seed. To get different random numbers, either don't set a seed at all, or pass a different seed to each process:

```python
import time, random
from multiprocessing import Manager, Process

def foo(epoch, lst):
    extra = random.random()
    lst.append(epoch + extra)

def optimization(loop_time, iter_time, lst):
    start = time.time()
    epoch = 0
    while time.time() <= start + loop_time:
        proc = Process(target=foo, args=(epoch, lst))
        proc.start()
        proc.join(iter_time)
        if proc.is_alive():  # if the process is not terminated within the time limit
            print("Time out!")
            proc.terminate()
    print(lst)

if __name__ == '__main__':
    manager = Manager()
    lst = manager.list()
    optimization(10, 2, lst)
```

Output (abridged):

[0.2035898948744943, 0.07617925389396074, 0.6416754412198231, 0.6712193790613651, 0.419777147554235, ...]

Keep in mind that using managers will affect the performance of your code. Alternatively, you could use multiprocessing.Array, which is faster than managers but less flexible in what data it can store, or use queues.
Factorial calculation with threading takes too long to finish, how to fix it?
I am trying to calculate whether a given number is prime or not with the formula: (n-1)! mod n =? (n-1). I must calculate the factorial with different threads and make them work simultaneously, check whether they are all finished, and if so join them. That way I calculate the factorial across threads and can then take the modulo. However, even though my code works fine for small prime numbers, it takes too long to execute when the number gets big. I searched my code and couldn't really find what slows down the execution. Here is my code:

```python
import threading
import time

# GLOBAL VARIABLE
result = 1

# worker class in order to multiply on threads
class Worker:
    # initiating the worker class
    def __init__(self):
        super().__init__()
        self.jobs = []

    # the function that does the actual multiplying
    def multiplier(self, beg, end):
        global result
        for i in range(beg, end + 1):
            result *= i
            # print("\tresult updated with *{}:".format(i), result)
        # print("Calculating from {} to {}".format(beg, end), " : ", result)

    # appending threads to the object
    def append_job(self, job):
        self.jobs.append(job)

    # function that is to see the threads
    def see_jobs(self):
        return self.jobs

    # initiating the threads
    def initiate(self):
        for j in self.jobs:
            j.start()

    # finalizing and joining the threads
    def finalize(self):
        for j in self.jobs:
            j.join()

    # controlling the threads by blocking until all threads are asleep
    def work(self):
        while True:
            if 0 == len([t for t in self.jobs if t.is_alive()]):
                self.finalize()
                break

# this is the function to split the factorial into several threads
def splitUp(n, t):
    # defining the remainder and the whole
    remainder, whole = (n - 1) % t, (n - 1) // t
    # deciding the tuple count
    tuple_count = whole if remainder == 0 else whole + 1
    # empty result list
    result = []
    # iterating
    beginning = 1
    end = (n - 1) // t
    for i in range(1, tuple_count + 1):
        if i == tuple_count:
            result.append((beginning, n - 1))  # if we are at the end, just append all to the end
        else:
            result.append((beginning, end * i))
            beginning = end * i + 1
    return result

if __name__ == "__main__":
    threads = 64
    number = 743
    splitted = splitUp(number, threads)
    worker = Worker()
    # print(worker.see_jobs())
    s = time.time()
    # creating the threads
    for arg in splitted:
        thread = threading.Thread(target=worker.multiplier(arg[0], arg[1]))
        worker.append_job(thread)
    worker.initiate()
    worker.work()
    e = time.time()
    print("result found with {} threads in {} secs\n".format(threads, e - s))
    if result % number == number - 1:
        print("PRIME")
    else:
        print("NOT PRIME")

"""
-------------------- REPORT ------------------------
result found with 2 threads in 6.162530899047852 secs
result found with 4 threads in 0.29897499084472656 secs
result found with 16 threads in 0.009003162384033203 secs
result found with 32 threads in 0.0060007572174072266 secs
result found with 64 threads in 0.0029952526092529297 secs
note that: these results may differ from machine to machine
-------------------------------------------------------
"""
```

Thanks in advance.
First and foremost, you have a critical error in your code that you haven't reported or tried to trace:

======================== RESTART: ========================
result found with 2 threads in 5.800899267196655 secs
NOT PRIME
>>>
======================== RESTART: ========================
result found with 64 threads in 0.002002716064453125 secs
PRIME
>>>

As the old saying goes, "if the program doesn't work, it doesn't matter how fast it is". The only test case you've given is 743; if you want help diagnosing the logic error, please determine the minimal test case and parallelism that cause it, and post a separate question.

I suspect the problem is in your multiplier function: you're working with an ill-advised global variable in parallel code, and your multiply operation is not thread-safe. In assembly terms, you have an unguarded region:

LOAD result
MUL i
STORE result

If this is interleaved with the same work from another thread, the result is virtually guaranteed to be wrong. You have to make this a critical region.

Once you fix that, you still have your speed problem. Factorial is the poster child for recursion acceleration. I see two obvious accelerants:
1. Instead of that horridly slow multiplication loop, use functools.reduce to blast through your multiplication series.
2. If you're going to run the program repeatedly over a series of inputs, short-cut most of the calculation with memoization. The example on the linked page benefits greatly from multiple recursion; since factorial is linear, you'd need repeated application to take advantage of the technique.
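To make those fixes concrete, here's a sketch in which each thread writes its partial product into its own slot (so there is no shared counter to guard at all) and the partial results are combined with functools.reduce after all threads join. The chunking is a simplified stand-in for the asker's splitUp. Note that CPython's GIL means threads won't actually speed up this CPU-bound loop; this sketch only fixes correctness:

```python
import threading
from functools import reduce
from operator import mul

def range_product(beg, end):
    # Each thread multiplies only its own slice; no shared state is touched.
    return reduce(mul, range(beg, end + 1), 1)

def threaded_factorial(n, workers=4):
    # Split 1..n-1 into contiguous chunks, one per thread.
    chunks = []
    step = max(1, (n - 1) // workers)
    beg = 1
    while beg <= n - 1:
        end = min(beg + step - 1, n - 1)
        chunks.append((beg, end))
        beg = end + 1
    results = [1] * len(chunks)

    def work(i, b, e):
        results[i] = range_product(b, e)   # distinct slot per thread: no race

    threads = [threading.Thread(target=work, args=(i, b, e))
               for i, (b, e) in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Combine the partial products only after every thread has finished.
    return reduce(mul, results, 1)

# Wilson's theorem: n > 1 is prime iff (n-1)! % n == n-1
print(threaded_factorial(743) % 743 == 742)  # True: 743 is prime
```

Because each thread owns its own `results` slot, the answer is the same no matter how the threads interleave.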
Activity selection using Greedy Algorithm in Python
Given the problem below, I have the following approach; however, I am not able to pass all the test cases.

Problem statement: A club has planned to organize several events. The volunteers are given a list of activities and the starting and ending times of those activities. Write a Python function that accepts the activity list, start_time list, and finish_time list. The function should find and return the list of the maximum number of activities that can be performed by a single person. Assume that a person can work on only a single activity at a time. If an activity performed by a person ends at x unit time, then he/she can take up the next activity starting at any time greater than or equal to x+1.

```python
def find_maximum_activities(activity_list, start_time_list, finish_time_list):
    activities = list(zip(activity_list, start_time_list, finish_time_list))
    activities.sort(key=lambda x: x[2])
    finish = 0
    result = []
    for i in activities:
        if finish <= i[1]:
            result.append(i[0])
            finish = i[2]
    return result

activity_list = [1, 2, 3, 4, 5, 6, 7]
start_time_list = [1, 4, 2, 3, 6, 8, 6]
finish_time_list = [2, 6, 4, 5, 7, 10, 9]

result = find_maximum_activities(activity_list, start_time_list, finish_time_list)
print("The maximum set of activities that can be completed:", result)
```
You are not handling the finish variable correctly; initialize it to -1 so the first activity is always considered:

```python
activities.sort(key=lambda x: x[2])
finish = -1
result = []
for i in activities:
    if finish <= i[1]:
        result.append(i[0])
        finish = i[2]
```

Try the above snippet.
I don't believe this is a greedy problem; IMO, it is a DP problem. For a given activity, you need the answer for every activity that starts after this one ends, so process the activities in decreasing order of start time. The answer for a given activity is then 1 + max(answer over all activities that start after this one ends). Make that max an O(1) or O(log n) operation.
Algorithm to return all possible paths in this program to a nested list
So I have a game with a function findViableMoves(base). If I call this function at the start with the parameter base, I get an output [move1, move2, ..., moven] denoting all n viable moves the user can perform given the state base (there are in fact 2 root moves). Upon performing a move, say move2, base gets changed depending on the move, the function gets called again, and now findViableMoves(base) outputs [move21, move22, ..., move2n].

[diagram: depth-first tree]

If you look at this diagram, it's a very similar situation: there's no looping back, it's just a normal tree. I need a program that performs a depth-first search (I think?) on all the possible moves given a starting state base, and then returns them in a list like:

[[move1, move11, move111], [move1, move11, move112], ..., [moven, moven1, moven11], ...]

There will be more elements in these lists (14 at most), but I was just wondering if someone could provide hints on how I can build an algorithm to do this. Efficiency doesn't really matter since there aren't too many paths; I just want it done for now.
I'm not 100% clear on what you're after, but if you have a list or similar iterable that changes while the loop is running, you could try something like the below. This example allows both the list and the loop condition to remain dynamic during the loop's execution.

```python
import random
import sys
import time

changing_list = ['A', 27, 0.12]

def perform_operation(changing_list, counter):
    sometimes_add_another_element_threshold = 0.6
    if random.random() > sometimes_add_another_element_threshold:
        changing_list.append(random.random())
    print(changing_list[counter])

def main(z=0):
    safety_limit = 100
    counter = 0
    condition = True
    while condition and counter < safety_limit:
        perform_operation(changing_list, counter)
        counter += 1
        condition = counter < len(changing_list)
    print("loop finished")

if __name__ == '__main__':
    start_time = time.time()
    main(int(sys.argv[1])) if len(sys.argv) > 1 else main()
    print(time.time() - start_time)
```

which produces output of varying length that looks something like:

A
27
0.12
0.21045788812161237
0.20230442292518247
loop finished
0.0058634281158447266
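For the actual question (enumerating every root-to-leaf move sequence), a plain recursive DFS with backtracking does it. This is a sketch: findViableMoves comes from the question, while the apply_move/undo_move helpers and the toy tree data are hypothetical stand-ins for whatever the game actually provides:

```python
def all_paths(state, find_viable_moves, apply_move, undo_move):
    """Return every root-to-leaf sequence of moves as a list of lists."""
    paths = []

    def dfs(path):
        moves = find_viable_moves(state)
        if not moves:                  # leaf: no viable moves left
            paths.append(list(path))
            return
        for move in moves:
            apply_move(state, move)    # descend into the child state
            path.append(move)
            dfs(path)
            path.pop()                 # backtrack so sibling branches
            undo_move(state, move)     # see the original state again

    dfs([])
    return paths

# Toy game: the state is the move history; this hypothetical tree
# defines which moves are viable from each history.
tree = {
    (): ["move1", "move2"],
    ("move1",): ["move11", "move12"],
    ("move2",): [],
    ("move1", "move11"): [],
    ("move1", "move12"): [],
}
state = []
paths = all_paths(
    state,
    lambda s: tree.get(tuple(s), []),
    lambda s, m: s.append(m),
    lambda s, m: s.pop(),
)
print(paths)  # [['move1', 'move11'], ['move1', 'move12'], ['move2']]
```

If the game can't undo a move, you can pass a copy of the state down each recursive call instead of applying and undoing in place.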
Python Limit number of threads allowed
For the code segment below, I would like to limit the number of running threads to 20. My attempt at doing this seems flawed: once the counter hits 20, it simply stops creating new threads, so those values of "a" never trigger the do_something() function (which must account for every "a" in the array). Any help is greatly appreciated.

```python
count = 0
for i in range(len(array_of_letters)):
    if i == "a":
        if count < 20:
            count=+1
            t = threading.Thread(target=do_something, args = (q,u))
            print "new thread started : %s"%(str(threading.current_thread().ident))
            t.start()
            count=-1
```
concurrent.futures has a ThreadPoolExecutor class, which lets you submit many tasks while specifying the maximum number of worker threads:

```python
with ThreadPoolExecutor(max_workers=20) as executor:
    for letter in array_of_letters:
        executor.submit(do_something, letter)
```

Check more examples in the package docs.
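A self-contained variant: if you also want the results back in input order, executor.map does the submission and the collection in one call. Here do_something and array_of_letters are hypothetical stand-ins (the real work in the question isn't shown):

```python
from concurrent.futures import ThreadPoolExecutor

def do_something(letter):
    # hypothetical stand-in for the real per-letter work
    return letter.upper()

array_of_letters = ["a", "b", "a", "c"]

with ThreadPoolExecutor(max_workers=20) as executor:
    # at most 20 worker threads run at once; map preserves input order
    results = list(executor.map(do_something, array_of_letters))

print(results)  # ['A', 'B', 'A', 'C']
```

Unlike the manual counter, the pool queues any extra tasks and runs them as worker threads free up, so every element is processed.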