I am learning SUMO from the beginning, and I have read and worked through most of the tutorials at http://sumo.dlr.de/wiki/Tutorials . What I want to do now is make cars slow down when there is traffic on a road. So far I only know how to change the speed limit after a certain time, as described here: http://sumo.dlr.de/wiki/Simulation/Variable_Speed_Signs . Do you know how I can change the speed limit when there is traffic? I think changing the value of the speed signs is the best idea here, but I don't know how to do it.
There is no such thing as an event-triggered speed sign in SUMO, so you probably need to do that via TraCI. The easiest way is probably to start with the TraCI tutorial for traffic lights and then use functions like traci.edge.getLastStepOccupancy to find out whether vehicles are on the edge in question, and traci.edge.setMaxSpeed to set a new speed.
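A minimal sketch of that approach, assuming an edge called "edge1", an occupancy threshold of 30 %, and a configuration file "my.sumocfg" (all of these are placeholders you would replace with your own values):

```python
import traci

EDGE_ID = "edge1"           # placeholder: the edge you want to monitor
NORMAL_SPEED = 13.89        # m/s (~50 km/h), the usual limit on that edge
REDUCED_SPEED = 5.56        # m/s (~20 km/h), the limit to apply when congested
OCCUPANCY_THRESHOLD = 30.0  # percent occupancy that counts as "traffic"

# start SUMO as a subprocess and connect to it
traci.start(["sumo", "-c", "my.sumocfg"])

while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # occupancy of the edge in the last step, in percent
    occupancy = traci.edge.getLastStepOccupancy(EDGE_ID)
    if occupancy > OCCUPANCY_THRESHOLD:
        traci.edge.setMaxSpeed(EDGE_ID, REDUCED_SPEED)
    else:
        traci.edge.setMaxSpeed(EDGE_ID, NORMAL_SPEED)

traci.close()
```

You could equally use traci.edge.getLastStepVehicleNumber or a lane-area detector as the congestion criterion; the occupancy check above is just one possibility.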
So here is the context.
I created a script in Python using YOLOv4, OpenCV, CUDA and cuDNN for object detection and object tracking, to count the objects in a video. I intend to use it in real time, but what does real time really mean? The video I'm using is 1 min long at 60 FPS originally, but after processing the output is 30 FPS on average and takes 3 min to finish. So comparing both videos side by side, one is clearly faster. 30 FPS is the industry standard for movies and similar material. I'm trying to wrap my head around what real time truly means.
Imagine I need to use this information for traffic light management, or to lift a bridge for a passing boat; it should happen automatically. It is time sensitive, and otherwise the chaos would be visible. In these cases, what does it truly mean to be real time?
First, learn what "real-time" means. Wikipedia: https://en.wikipedia.org/wiki/Real-time_computing
Understand the terms "hard" and "soft" real-time. Understand which aspects of your environment are soft and which require hard real-time.
Understand the response times that your environment requires. Understand the time scales.
This does not involve fuzzy terms like "quick" or "significant" or "accurate". It involves actual quantifiable time spans that depend on your task and its environment, acceptable error rates, and so on.
You did not share any details about your environment. I find it unlikely that you even need 30 fps for any application involving a road intersection.
You only need a frame rate high enough that you don't miss objects of interest, and fine enough data to track multiple objects by identity without mistaking them for each other.
Example: assume a car moving at 200 km/h. If your camera takes a frame every 1/30 second, the car moves 1.85 meters between frames.
How's your motion blur? What's the camera's exposure time? I'd recommend something on the order of a millisecond or better, which gives motion blur of roughly 0.06 m for that car.
How's your tracking? Can it deal with objects "jumping" that far between frames? Does it generate object identity information that is usable for matching (association)?
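A quick back-of-the-envelope sketch of those two numbers (the speed, frame rate and exposure time are just the example values from above):

```python
# worked example for the frame-to-frame displacement and motion blur above
speed_kmh = 200.0            # assumed vehicle speed
speed_ms = speed_kmh / 3.6   # ~55.6 m/s

frame_interval = 1.0 / 30.0  # seconds between frames at 30 fps
exposure_time = 1e-3         # assumed exposure time of 1 ms

displacement_per_frame = speed_ms * frame_interval  # ~1.85 m between frames
motion_blur = speed_ms * exposure_time              # ~0.056 m of blur per frame

print(f"displacement between frames: {displacement_per_frame:.2f} m")
print(f"motion blur per frame:       {motion_blur:.3f} m")
```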
For my first project in reinforcement learning I'm trying to train an agent to play a real-time game. This means that the environment constantly moves and changes, so the agent needs to be precise about its timing. In order to have a correct sequence, I figured the agent will have to work at a certain frequency. By that I mean that if the agent has a 10 Hz frequency, it will have to take inputs and make a decision every 0.1 s. However, I couldn't find any sources on this problem, probably because I'm not using the correct terminology in my searches. Is this a valid way to approach the matter? If so, what can I use? I'm working with Python 3 on Windows (the game only runs on Windows); are there any libraries that could be used? I'm guessing time.sleep() is not a viable way out, since it isn't very precise (at high frequencies) and since it just freezes the agent.
EDIT: So my main questions are:
a) Should I use a certain frequency, is this a normal way to operate a reinforcement learning agent?
b) If so what libraries do you suggest?
There isn't a clear answer to this question, as it depends on a variety of factors, such as the inference time of your model, the maximum control rate accepted by the environment, and the control rate required to solve the environment.
As you are trying to play a game, I am assuming that your eventual goal might be to compare the performance of the agent with the performance of a human.
If so, a good approach would be to select a control rate similar to what humans might use in the same game, which is most likely lower than 10 Hz.
You could try to measure how many actions per second you take when playing yourself, to get a good estimate.
However, any reasonable frequency, such as the 10 Hz you suggested, should be a good starting point for working on your agent.
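If you do go with a fixed control rate, a simple way to keep it steady in plain Python is to schedule each step against a monotonic clock instead of sleeping for a fixed amount, so small delays don't accumulate into drift. A minimal sketch, where the agent and environment calls are placeholders you would replace with your own code:

```python
import time

CONTROL_HZ = 10          # assumed control frequency
PERIOD = 1.0 / CONTROL_HZ

def read_observation():
    # placeholder: grab a frame / read the game state here
    return None

def choose_action(observation):
    # placeholder: run your policy / model here
    return None

def apply_action(action):
    # placeholder: send keyboard or controller input to the game here
    pass

next_tick = time.perf_counter()
while True:
    obs = read_observation()
    action = choose_action(obs)
    apply_action(action)

    # schedule the next step relative to the previous one,
    # so jitter in a single step does not accumulate
    next_tick += PERIOD
    delay = next_tick - time.perf_counter()
    if delay > 0:
        time.sleep(delay)
    else:
        # the step took longer than the period; resynchronise
        next_tick = time.perf_counter()
```

If your inference regularly takes longer than the period, you either need a lower control rate or a faster model; no timing library can fix that part.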
I'm building a UGV prototype. The goal is to perform the desired actions on targets placed inside a maze. From what I find on the Internet, navigation inside a labyrinth is usually done with a simple distance sensor; I'm asking here because I would like to gather more ideas than that.
I want to navigate the labyrinth by analyzing the images from a 3D stereo camera. Is there a resource or a successful method you can suggest for this? As a secondary problem, the car must start in front of the entrance of the labyrinth, see the entrance and drive in, and then leave the labyrinth after it has completed its tasks inside.
I would be glad if you could suggest a source for this problem. :)
The problem description is a bit vague, but I'll try to highlight some general ideas.
A useful assumption is that the labyrinth is a 2D environment which you want to explore. You need to know, at every moment, which part of the map has been explored, which part still needs exploring, and which part is accessible at all (in other words, where the walls are).
An easy initial data structure to help with this is a simple matrix, where each cell represents a square in the real world. Each cell can then be labelled according to its state, starting in an unexplored state. Then you start moving and exploring. Based on the distances reported by the camera, you can estimate the state of each cell. The exploration can be guided by something such as A* or Q-learning.
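A minimal sketch of such a grid, assuming a 20x20 map and three cell states; the grid size, the wall threshold and the update rule are placeholders, and a real system would project each depth measurement into map coordinates and update all cells along the ray:

```python
from enum import IntEnum

import numpy as np

class Cell(IntEnum):
    UNEXPLORED = 0
    FREE = 1
    WALL = 2

GRID_SIZE = 20  # assumed number of cells per side
grid = np.full((GRID_SIZE, GRID_SIZE), Cell.UNEXPLORED, dtype=np.int8)

def update_cell(row, col, measured_distance, wall_threshold=0.3):
    """Label one cell from a (hypothetical) stereo-camera distance reading."""
    if measured_distance < wall_threshold:
        grid[row, col] = Cell.WALL
    else:
        grid[row, col] = Cell.FREE

# example update: the cell in front of the robot looks free
update_cell(10, 11, measured_distance=1.2)
print(grid[9:12, 10:13])
```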
Now, a rather subtle issue is that you will have to deal with uncertainty and noise. Sometimes you can ignore it, sometimes you can't. The finer the resolution you need, the bigger the issue becomes. A probabilistic framework is most likely the best solution.
There is an entire field of research around so-called SLAM algorithms. SLAM stands for simultaneous localization and mapping. These algorithms build a map from some sort of input from various types of cameras or sensors, and while building the map they also solve the localization problem within it. They are usually designed for 3D environments and are more demanding than the simpler solution outlined above, but you can find ready-to-use implementations. For exploration, something like Q-learning still has to be used on top.
This is a question where I don't even know where to search for an answer. I have a Python program with far too many calculations; for example, consider a DFS with a branching factor of 62 and a depth of 20. If I run this program on my PC it would take ages to complete. Is there any website that gives me resources to do this job? Something where I put my code in, run it, and then check the results two days later.
I'm aware that this question may be flagged as spam or something like that, but thanks for your help anyway!
UPDATE
I investigated this little question further. What I really want is cloud computing, but for free!
I doubt that cloud computing would help you, as it might take ages in the cloud as well. Cloud computing should be used when your code can be efficiently parallelised into subproblems of reasonable complexity. You can parallelise a DFS (say, based on the choice of the first branch), but you are still left with problems of almost the same size. You should consider optimising or approximating your calculations so that they can be run with restricted resources.
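To illustrate the kind of parallelisation mentioned above (splitting the search by the choice of the first branch), here is a hedged sketch using Python's standard process pool; the tree is a stand-in with a tiny depth so the example actually finishes, and each worker still faces a subtree of almost the full size:

```python
from concurrent.futures import ProcessPoolExecutor

BRANCHING = 62  # branching factor from the question
MAX_DEPTH = 3   # kept tiny here; depth 20 would be astronomically larger

def children(node):
    # placeholder tree: every node has BRANCHING children
    return [node * BRANCHING + i for i in range(BRANCHING)]

def dfs_count(node, depth):
    """Sequential DFS that just counts the nodes below `node`."""
    if depth == MAX_DEPTH:
        return 1
    total = 1
    for child in children(node):
        total += dfs_count(child, depth + 1)
    return total

def explore_first_branch(first_child):
    # each worker explores the subtree under one child of the root
    return dfs_count(first_child, 1)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        counts = pool.map(explore_first_branch, children(0))
    total = 1 + sum(counts)  # +1 for the root itself
    print("nodes visited:", total)
```

Even with perfect parallelisation this only divides the work by the number of cores, which does not change the exponential growth in depth; that is why pruning or approximation is the more promising direction.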
I am trying to use TraCI commands in the "runner.py" file, and in the TraCI wiki the commands are presented in a hexadecimal format.
How can I configure the behaviour of a vehicle in the "runner.py" file?
Can we change the parameters of a vehicle dynamically (e.g. change its speed during the simulation)?
Change the speed of the specified vehicle(s) to the given value over the given amount of time in milliseconds (increasing or decreasing gradually to the new speed). I guess that would only be possible using TraCI commands. If so, in what format can I use those commands?
If there is traffic on the current lane, the vehicle should be able to switch to the next lane accordingly.
How can I control the vehicles so that they do not undergo random lane changes?
I would really appreciate it if someone could help me sort this out.
Thanks in advance
It is possible to adapt the vehicle speed. In the Python client the function is called traci.vehicle.slowDown and needs the vehicle id, the new speed and the duration as parameters. For a somewhat better documentation of the TraCI Python commands, look here: http://sumo.dlr.de/pydoc/traci.vehicle.html
Lane changing is not affected by this call and happens as usual. Note, however, that you will not be able to increase the speed with this function, because the vehicle already drives at the safe maximum speed. If that is limited by the vehicle's own maximum speed, you may adapt it using traci.vehicle.setMaxSpeed.
Vehicles do not change lanes randomly; they always have a reason to do so. You have limited control over this behavior using the setLaneChangeMode function (http://sumo.dlr.de/pydoc/traci.vehicle.html#-setLaneChangeMode). An explanation of the individual bits is here: http://sumo.dlr.de/wiki/TraCI/Change_Vehicle_State#lane_change_mode_.280xb6.29
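A small sketch tying these calls together; the vehicle id, the speeds, the step numbers and the configuration file are placeholders, and note that whether slowDown expects the duration in milliseconds or in seconds depends on your SUMO version:

```python
import traci

VEH_ID = "veh0"  # placeholder vehicle id from your route file

traci.start(["sumo", "-c", "my.sumocfg"])

for step in range(1000):
    traci.simulationStep()

    if VEH_ID in traci.vehicle.getIDList():
        if step == 100:
            # gradually reduce the vehicle's speed to 5 m/s over 5 seconds
            # (use 5000 instead of 5 on older SUMO versions that expect ms)
            traci.vehicle.slowDown(VEH_ID, 5.0, 5)
        if step == 200:
            # raise the vehicle's own speed limit so it can drive faster again
            traci.vehicle.setMaxSpeed(VEH_ID, 20.0)
        if step == 300:
            # lane change mode 0 disables all automatic lane changing for this
            # vehicle; see the lane change mode wiki page above for the bits
            traci.vehicle.setLaneChangeMode(VEH_ID, 0)

traci.close()
```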