When running run_parallel_processing() with the following configs:
Config file loaded and logger created.
18 scenarios skipped.
ScenarioLoader Mode <DEFAULT>
Run on all scenarios, except the skipped scenarios
Number of scenarios: 500
Automaton loaded for vehicle type: BMW_320i
Motion planner: student_example
Number of parallel processes: 6
After 246 scenarios were processed, I had 15 passed, 14 failed, and 208 time-outs. Then I got OSError: [Errno 24] Too many open files and a crash.
Why are so many of my scenarios timing out, even though around 320 are supposed to work with student_example.py? I used a local installation; might that be the problem?
Thanks for your post! Could you try with the default A* planner in the batch_processing_config? We tested it with this planner and ~320 scenarios could already be solved (as described in the exercise description). Let us know in case it doesn’t work similarly for you. The student_example.py is just given as an example of how to implement your own solution in student.py and is not provided as a particularly good planner ;).
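For reference, switching planners should only require changing the corresponding entry in the batch_processing_config .yaml file. The key name below is illustrative, not the verified schema; look for the entry currently pointing to student_example in your copy:

```yaml
# Illustrative fragment of the batch processing config; the actual key
# name may differ in your version of the exercise code.
motion_planner: astar   # was: student_example
```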
Thanks for the quick response.
With astar I’m getting 137 solutions, 14 fails, and 87 time-outs out of 246; then it crashes at the same step as before:
OMP: Error #179: Function Can’t open SHM2 failed:
OMP: System error #24: Too many open files
But this means astar is on track to find about 320.
Any ideas why the program is crashing? Also is there a better way to debug the heuristic function than running run_parallel_processing() every time?
It seems like it is working in principle. We have never encountered this error before; on which platform are you running the code (you mentioned you installed it locally)?
I’m using a 2.6 GHz 6-core Intel Core i7 with macOS 13.0.1 (I already tried using fewer than 6 cores in parallel, but that’s not the issue; there are cores in the GPU as well, and with my RL projects on CommonRoad I can run 9 simulations in parallel).
I assume that if I can’t fix the issue, I have no choice but to use a different platform, correct?
OK, your setup should be fine, since we have tested the software on both M1 Macs and Intel Macs. I briefly googled the error, and it seems to be a specific error related to the multiprocessing package used in parallel processing. Maybe this post, also from a Mac user, helps (python - Multiprocess and Multiprocessing with no file IO: OSError: [Errno 24] Too many open files - Stack Overflow)? I will also ask my colleague to test the parallel processing again on macOS.
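In case it helps while you investigate: a common workaround for Errno 24 on macOS is to raise the process’s open-file limit before the worker processes are spawned. A minimal sketch (the value 4096 is an arbitrary choice, not something the exercise prescribes):

```python
import resource

# Inspect the current open-file limits; the macOS default soft limit is
# often as low as 256, which parallel workers can exhaust quickly.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# Raise the soft limit (capped at the hard limit) before calling
# run_parallel_processing().
new_soft = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

Alternatively, `ulimit -n 4096` in the shell before starting Python has the same effect for that session.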
Regarding your previous question about debugging: for debugging purposes we have the sequential processing. However, it is not advisable to run all 500 scenarios in sequential processing, as this takes an extremely long time. Instead, one could list some scenarios one wants to debug (e.g., the currently failing scenarios) and debug them in sequential processing.
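For illustration, a minimal sketch of such a debug loop. `run_sequential` below is only a placeholder stub standing in for the sequential-processing entry point of the exercise code (not the actual API), and the scenario IDs are placeholders for the ones you want to inspect:

```python
def run_sequential(scenario_id: str) -> str:
    """Placeholder stub; replace with the real sequential-processing call."""
    return "PASSED"

# List only the scenarios you want to inspect (e.g. the currently failing ones)
scenarios_to_debug = ["<scenario-id-1>", "<scenario-id-2>"]

for scenario_id in scenarios_to_debug:
    try:
        # In sequential processing you can set a breakpoint here and step
        # directly into your heuristic function.
        print(f"{scenario_id}: {run_sequential(scenario_id)}")
    except Exception as exc:
        print(f"{scenario_id}: raised {exc!r}")
```

This avoids restarting run_parallel_processing() for every change to the heuristic.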
I checked the exercise guide and found that there is no explicit restriction on the timeout. But please make sure you also submit the modified configuration file (.yaml) if you want to participate in the bonus challenge. I haven’t confirmed this with Gerald yet, so it might be subject to change.
Another suggestion from my experience last year (if I remember correctly): a higher timeout value won’t help A* very much; it will mostly result in a longer waiting time without much improvement. You could check whether that is truly the case.