Vehicle Speed change (runner.py file) in SUMO Simulation - python

I am trying to use TraCI commands in the "runner.py" file, but in the TraCI wiki the commands are documented as low-level (hexadecimal) command IDs rather than Python calls.
How do I configure the behavior of a vehicle in the "runner.py" file?
Can we change a vehicle's parameters dynamically (e.g. change its speed during the simulation)?
I want to change the speed of the specified vehicle(s) to a given value over a given amount of time in milliseconds (to decrease or increase the speed). I guess that is only possible using TraCI commands. If so, in what format can I use those commands?
If there is traffic on the current lane, the vehicle should be able to switch to the next lane accordingly.
How can I prevent the vehicles from making random lane changes?
I would really appreciate it if someone could help me sort this out.
Thanks in advance.

It is possible to adapt the vehicle speed. In the Python client the function is called traci.vehicle.slowDown and takes the vehicle id, the new speed and the duration as parameters. For somewhat better documentation of the TraCI Python commands look here: http://sumo.dlr.de/pydoc/traci.vehicle.html
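For example, a minimal sketch of slowing a vehicle down mid-simulation. The configuration file name and the vehicle id "veh0" are placeholders, and note that the units of the duration argument differ between SUMO versions (milliseconds in older releases, seconds in newer ones):

    import traci

    # placeholder scenario; point this at your own .sumocfg
    traci.start(["sumo", "-c", "my_scenario.sumocfg"])

    for step in range(1000):
        traci.simulationStep()
        if step == 100 and "veh0" in traci.vehicle.getIDList():
            # smoothly bring veh0 down to 5 m/s over the given duration
            # (check whether your SUMO version expects seconds or milliseconds here)
            traci.vehicle.slowDown("veh0", 5.0, 3)

    traci.close()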
Lane changing is not affected by this call and happens as usual. Note however that you will not be able to increase the speed with this function, because the vehicle already drives at the safest maximum speed. If that is limited by the vehicle's own maximum speed, you may adapt it using traci.vehicle.setMaxSpeed.
Vehicles do not change lanes randomly; they always have a reason to do so. You have limited control over this behavior using the http://sumo.dlr.de/pydoc/traci.vehicle.html#-setLaneChangeMode function. An explanation of the bits is here: http://sumo.dlr.de/wiki/TraCI/Change_Vehicle_State#lane_change_mode_.280xb6.29
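As a rough sketch of those two calls (the vehicle id, the speed value and the bit mask are placeholders; check the bit table on the wiki page above for the exact lane change mode you want, since the meaning of the bits is version-dependent):

    import traci

    # raise the vehicle's own speed limit so a higher speed is actually reachable
    traci.vehicle.setMaxSpeed("veh0", 30.0)   # m/s

    # 0 disables all autonomous lane-change decisions for this vehicle;
    # build a different bit mask from the wiki table to allow e.g. only
    # strategic changes while suppressing speed-gain ("random"-looking) ones
    traci.vehicle.setLaneChangeMode("veh0", 0)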

Related

What does real-time object detection really mean?

So here is the context.
I created a script with Python, YOLOv4, OpenCV, CUDA and cuDNN for object detection and object tracking, to count the objects in a video. I intend to use it in real time, but what does real time really mean? The video I'm using is 1 minute long at 60 FPS originally, but after processing it runs at 30 FPS on average and takes 3 minutes to finish. So comparing both videos side by side, one is clearly faster. 30 FPS is the industry standard for movies and the like. I'm trying to wrap my head around what real time truly means.
Imagine I need to use this information for traffic light management, or to lift a bridge for a passing boat; it should be done automatically. It's time sensitive, or the chaos would be visible. In these cases, what does it truly mean to be real time?
First, learn what "real-time" means. Wikipedia: https://en.wikipedia.org/wiki/Real-time_computing
Understand the terms "hard" and "soft" real-time. Understand which aspects of your environment are soft and which require hard real-time.
Understand the response times that your environment requires. Understand the time scales.
This does not involve fuzzy terms like "quick" or "significant" or "accurate". It involves actual quantifiable time spans that depend on your task and its environment, acceptable error rates, ...
You did not share any details about your environment. I find it unlikely that you even need 30 fps for any application involving a road intersection.
You only need a frame rate high enough that you don't miss objects of interest and that gives you fine enough data to track multiple objects, with identity, without mistaking them for each other.
Example: assume a car moving at 200 km/h. If your camera takes a frame every 1/30 second, the car moves 1.85 meters between frames.
How's your motion blur? What's the camera's exposure time? I'd recommend something on the order of a millisecond or better, which at that speed gives motion blur of roughly 0.05 m.
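To make that arithmetic explicit, a quick back-of-the-envelope check (the 200 km/h speed, 30 fps frame rate and 1 ms exposure are the example figures above):

    speed_ms = 200.0 / 3.6           # 200 km/h is about 55.6 m/s

    frame_interval = 1.0 / 30.0      # seconds between frames at 30 fps
    exposure = 1e-3                  # 1 ms exposure time

    print(speed_ms * frame_interval) # ~1.85 m travelled between frames
    print(speed_ms * exposure)       # ~0.056 m of motion blur per frame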
How's your tracking? Can it deal with objects "jumping" that far between frames? Does it generate object identity information that is usable for matching (association)?

Understanding a Depth-First Branch and Bound implementation for StarCraft 2

The problem is that I'm finding it difficult to understand how DFBB works, what the parameters and output should be for this case.
I'm working on creating an AI for the game StarCraft 2 that will handle the build order in the game (for the Terran faction). I was planning to follow the approach described in the linked article (see below), which does something very similar to what I'm going for. To summarize what I'm planning to do:
A list of the different types of buildings that need to be built will be given to me. Buildings cost minerals and gas (the in-game currencies), some buildings have prerequisites (meaning other buildings need to be built before it is possible to build them), and each takes a certain amount of time to build.
In the article they used Depth-First Branch and Bound (DFBB) to figure out the optimal build order, meaning the fastest possible way to build the buildings in that list, and they give pseudocode for it.
There, the state S is represented by S = (current game time, resources available, actions in progress but not completed, worker income data). How S' is derived is described in the article; it is done through three functions, so that bit I understand.
As mentioned earlier, I'm struggling to understand what the starting state S, the goal G, the time limit t and the bound b should be in the pseudocode they describe.
I only know three things for sure: the list of buildings that need to be built, what consumables I have at the moment (minerals and gas), and my resources (that is, the buildings I already have in the game). This should then be fed into the algorithm somehow, but it is unclear what the input to the function should be. The output should be a list sorted in the right order, so that if I were to build the buildings in the order they come in, it would all work out and take the optimal possible time.
For example, should I iterate through the list of buildings and run DFBB on every element, with the goal then being to see whether that building can be built? But what should the time limit be set to, and what does the bound mean in this case? Is it simply the cost?
Please explain how this function should be run on the list in order to find the optimal order to build it in. The article is fairly easy to read, but I need some help understanding how it is meant to work and how I can apply it to my problem.
Link to article: https://ai.dmi.unibas.ch/research/reading_group/churchill-buro-aiide2011.pdf
The starting state S is the initial state at the start of the game. I believe you have 100 minerals, a Command Center and 12(?) SCVs, so that's your start.
The goal G here is the list of buildings you want to have. The "satisfies" condition checks whether all buildings in the goal are also in S.
The time limit t is the amount of (wall-clock) time you are willing to spend to get the result. If you set it to 5 seconds, it will probably give you a sub-optimal solution, but it will do it within 5 seconds. If the algorithm finishes the search, it will return earlier. If you don't care, leave it out, but make sure you write solutions to a file in case something happens.
The bound b is the in-game time limit for building everything. You initially set it to infinity or some generous value (10 minutes, say). When you find a solution, b gets updated, so every new solution you find MUST be faster (in-game) than the previous one.
A few notes. Make sure that the possible actions (the children in step 9) include doing nothing (waiting for more resources) and building an SCV.
Another thing that might be missing is a correct modelling of SCV movement speed. The units need to move to a place to build something and it also takes time for them to get back to mining.
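To illustrate how these pieces fit together, here is a rough Python sketch of depth-first branch and bound over build orders. It is not the paper's implementation; the state object and the legal_actions, apply_action, satisfies and lower_bound helpers are hypothetical placeholders for the functions the article describes:

    import math
    import time

    def dfbb(start, goal, time_limit, legal_actions, apply_action, satisfies, lower_bound):
        """Returns (best in-game finish time, best build order found)."""
        deadline = time.time() + time_limit        # wall-clock time limit t
        best = {"bound": math.inf, "plan": None}   # bound b starts at infinity

        def search(s, plan):
            if time.time() > deadline:             # out of wall-clock time
                return
            if satisfies(s, goal):                 # every goal building exists in s
                if s.game_time < best["bound"]:    # found a faster build order
                    best["bound"] = s.game_time
                    best["plan"] = list(plan)
                return
            # prune: an optimistic completion estimate cannot beat the current bound
            if s.game_time + lower_bound(s, goal) >= best["bound"]:
                return
            for action in legal_actions(s):        # children, incl. "wait" and "build SCV"
                search(apply_action(s, action), plan + [action])

        search(start, [])
        return best["bound"], best["plan"]

Here time_limit plays the role of t and best["bound"] plays the role of b: each new solution tightens the bound, exactly as described above.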

Time step in reinforcement learning

For my first project in reinforcement learning I'm trying to train an agent to play a real-time game. This means that the environment constantly moves and changes, so the agent needs to be precise about its timing. In order to have a correct sequence of decisions, I figured the agent will have to work at a certain frequency. By that I mean that if the agent runs at 10 Hz, it has to take inputs every 0.1 s and make a decision. However, I couldn't find any sources on this problem, probably because I'm not using the correct terminology in my searches. Is this a valid way to approach the matter? If so, what can I use? I'm working with Python 3 on Windows (the game only runs on Windows); are there any libraries that could be used? I'm guessing time.sleep() is not a viable way out, since it isn't very precise (at high frequencies) and since it just freezes the agent.
EDIT: So my main questions are:
a) Should I use a fixed frequency? Is this a normal way to operate a reinforcement learning agent?
b) If so what libraries do you suggest?
There isn't a clear answer to this question, as it is influenced by a variety of factors, such as the inference time of your model, the maximum control rate the environment accepts, and the control rate required to actually solve the environment.
As you are trying to play a game, I am assuming that your eventual goal might be to compare the performance of the agent with the performance of a human.
If so, a good approach would be to select a control rate that is similar to what humans might use in the same game, which is most likely lower than 10 Hertz.
You could try to measure how many actions per second you take when playing the game yourself to get a good estimate. However, any reasonable frequency, such as the 10 Hz you suggested, should be a good starting point for working on your agent.
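As for holding a fixed rate in practice, one common pattern is to sleep only for the remainder of each period against a monotonic clock, which drifts far less than calling time.sleep(0.1) back to back. A minimal sketch, where the three game-interface functions are placeholders for your own code:

    import time

    def get_observation():             # placeholder: read the game state here
        return None

    def choose_action(obs):            # placeholder: query the agent's policy here
        return None

    def send_action(action):           # placeholder: send the input to the game here
        pass

    RATE_HZ = 10                       # the 10 Hz control rate from the question
    PERIOD = 1.0 / RATE_HZ

    next_tick = time.perf_counter()
    for _ in range(100):               # 100 control steps = 10 seconds at 10 Hz
        send_action(choose_action(get_observation()))

        next_tick += PERIOD
        remaining = next_tick - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)      # sleep only the leftover part of the period
        else:
            next_tick = time.perf_counter()  # fell behind; reset the schedule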

SUMO simulation: detecting high traffic and reducing the speed limit

I am learning SUMO from the beginning; I have read and worked through most of the tutorials at http://sumo.dlr.de/wiki/Tutorials . What I want to do now is make cars slow down when there is traffic on a road. I only know how to change the speed limit after a certain time, from here: http://sumo.dlr.de/wiki/Simulation/Variable_Speed_Signs . Do you know how I can change the speed limit when there is traffic? I think that changing the value of the speed signs is the best idea here, but I don't know how to do it.
There is no such thing as event-triggered speed signs in SUMO, so you probably need to do that via TraCI. The easiest way is probably to start with the TraCI tutorial for traffic lights and then use functions like traci.edge.getLastStepOccupancy to find out whether vehicles are on the edge in question, and traci.edge.setMaxSpeed to set a new speed.
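A minimal sketch of such a loop; the configuration file, the edge id "edge0", the occupancy threshold and the two speed values are placeholders to adapt to your own network:

    import traci

    traci.start(["sumo", "-c", "my_scenario.sumocfg"])   # placeholder scenario

    NORMAL_SPEED = 13.89    # ~50 km/h in m/s
    REDUCED_SPEED = 8.33    # ~30 km/h in m/s

    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        # getLastStepOccupancy reports how much of the edge was covered by
        # vehicles in the last step (in percent); treat >20 % as "traffic"
        if traci.edge.getLastStepOccupancy("edge0") > 20.0:
            traci.edge.setMaxSpeed("edge0", REDUCED_SPEED)
        else:
            traci.edge.setMaxSpeed("edge0", NORMAL_SPEED)

    traci.close()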

How to figure out multilateration with xyz positions of each post and difference in time?

I'm having some issues figuring out multilateration. I'll start by saying I'm not a math whiz, but I am usually able to figure most things out; this one has confused me, though. I got to this point after reading up on Time Difference of Arrival.
I have four wifi adapters. Each one sits at a vertex of a three-sided pyramid, so this should be able to take height into account, I believe. Their positions relative to each other are fixed as well.
What I'm attempting to do is listen for wifi signals and find their origin. In theory, I believe I should be able to use the difference in time between each wifi adapter "hearing" a packet to find the origin of the packet.
I've paired a GPS into this. It allows me to give each wifi adapter an actual position (with a little math).
So here's what I have when I receive a packet:
wlan1 (X, Y, Z, timestamp)
wlan2 (X, Y, Z, timestamp)
wlan3 (X, Y, Z, timestamp)
wlan4 (X, Y, Z, timestamp)
X and Y are lat/lng, Z is the altitude in meters, and the timestamp is in microseconds.
Some assumptions to make are that the XYZ are accurate. For all practical purposes, if they're off, then they're all consistently off, which should be reflected in finding the source.
I haven't been able to figure out how to apply any math to this, and am seeking an example. I can provide some actual data if necessary. The end goal is working on a robotics project that'll let a robot follow you, or more accurately your cell phone. The reason I'm taking this approach is that it lets me log things in a way that in the end should be extremely easy to debug visually on a Google Map.
I believe that by taking a difference in time from each point and comparing it across the adapters, I should be able to have a somewhat accurate shot at the origin location, but this math is just too far beyond me right now.
I have cross-posted this question to the Mathematics site.
There are various algorithms for this; I found a simple paper here that looks helpful, but there are also more advanced least-squares algorithms in various journals.
Just as a warning, multilateration is very sensitive to position errors of the sensors and to errors in the time difference of arrival. So your results might not be particularly good -- you've said your clocks are not synchronized (they need to be) and that you are using GPS for location (which has roughly a ±3 m error). For what it's worth, you can use GPS for time too, but I'm not sure of the error on that.
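For concreteness, a minimal sketch of the least-squares approach, assuming the receiver clocks are synchronized and the positions have already been converted into a local Cartesian x/y/z frame in meters. The receiver coordinates and arrival times below are made-up numbers, not real data:

    import numpy as np
    from scipy.optimize import least_squares

    C = 299792458.0                          # propagation speed (m/s)

    # made-up receiver positions (m, local frame) and arrival times (s)
    receivers = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.5, 0.5, 1.0]])
    arrival_times = np.array([3.34e-8, 2.10e-8, 2.55e-8, 4.01e-8])

    def residuals(x):
        # predicted minus measured time difference of arrival, relative to receiver 0
        dist = np.linalg.norm(receivers - x, axis=1)
        predicted_tdoa = (dist - dist[0]) / C
        measured_tdoa = arrival_times - arrival_times[0]
        return (predicted_tdoa - measured_tdoa)[1:]

    initial_guess = receivers.mean(axis=0)   # start somewhere near the array
    solution = least_squares(residuals, initial_guess)
    print(solution.x)                        # estimated source position

The same structure carries over if you swap in real measurements; the hard part, as the answer below points out, is getting timestamps accurate enough for the time differences to mean anything.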
A couple of (unfortunately negative) points:
If your timestamps are computed when the signal hits the antennae, then all you'll be able to work out is the direction to the source, not the distance. After all, a signal that comes from a million miles away will have essentially the same propagation delay between two antennas as one that comes from a meter away.
Unless your robot is very large, I would be surprised if the deltas between the timestamps were not completely dominated by factors other than signal propagation delay. EM radiation travels about 30 cm per nanosecond, so there is very little room for error. For example:
the wifi adapters will have some kind of onboard processing firmware - how quickly does it report new signals? Is the delay constant or does it depend on arcane details of the 802.11 spec? Will you be notified of the arrival of a signal, or the arrival of a complete packet which may have been the result of a whole series of acks and retransmissions?
Your device is linked to the adapters via some kind of I/O bus. Even if we assume the adapters are perfect, there's going to be contention on this bus when a new pulse is received; which adapter wins and gets processed first?
Your device may have a single-core CPU - how quickly can a signal from an adapter be processed and given a timestamp? The delay between events will determine the fidelity of your timestamps, and thus the maximum accuracy of the system.
Is the device completely to-the-metal dedicated to putting timestamps on signals, or is there other software running too? What if some other event pre-empts your signal processing?
If you're in an indoor environment you will get indirect propagation - assuming that the system itself is perfect, how do you detect the case where the signal detected on one adapter took a longer path by bouncing off a wall or two?
