Error while running batch_processing.ipynb

Thanks for the reply. I downloaded the 27/11 VM, but I am facing this error: when I run any ipynb, the memory just fills up, and there seems to be no option to increase the memory allocated to the virtual machine. Kindly help!

Hi, you should be able to adjust the memory in the Settings panel of VirtualBox, under ‘System’.

You could also try to change the value for max_tree_depth in batch_processing_config.yaml.
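
For reference, here is a minimal sketch of how that value could be lowered programmatically. The flat top-level `max_tree_depth` key is an assumption; adjust the key path to match the actual layout of your config file.

```python
# Sketch: lower max_tree_depth in the batch processing config.
# Assumes a flat YAML layout with a top-level 'max_tree_depth' key;
# adjust to match the actual structure of your config file.
import yaml  # PyYAML

CONFIG_PATH = "batch_processing_config.yaml"

with open(CONFIG_PATH) as f:
    config = yaml.safe_load(f)

config["max_tree_depth"] = 50  # a smaller depth means less memory during search

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(config, f)
```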


Well, I tried, but it did not work! Maybe I will figure something out!

I have to check that, will do. Thanks @jkljkljkl!

Also, just one more doubt: when we run tutorial_commonroad_search.ipynb for one scenario and find a solution, irrespective of the cost, we can count that as one of those 110 scenarios, right?

Yes, that’s acceptable.

Thanks a lot! One more off-topic thing: is there a way we could use a faster desktop in some lab or elsewhere? My laptop just hangs most of the time after running a few scenarios. I have tried Linux and Windows, using both Docker and the VM. Thanks!


Or use the function find_all_colliding_objects to drastically improve collision checking performance. 🙂 Python loops are slow.
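
The general principle can be shown with a self-contained toy example. This uses plain NumPy axis-aligned bounding boxes, not the actual CommonRoad collision checker API; the batched NumPy test plays the role that find_all_colliding_objects plays in the real library.

```python
# Toy illustration of why one batched native call beats a Python loop.
# Plain NumPy AABB overlap tests, not the CommonRoad API.
import numpy as np

rng = np.random.default_rng(0)
# 100,000 obstacle boxes as (xmin, ymin, xmax, ymax).
boxes = np.sort(rng.uniform(0, 100, size=(100_000, 2, 2)), axis=1).reshape(-1, 4)
ego = np.array([40.0, 40.0, 60.0, 60.0])  # ego vehicle box

def colliding_loop(boxes, ego):
    # Slow: one Python-level overlap test per obstacle.
    hits = []
    for i, (xmin, ymin, xmax, ymax) in enumerate(boxes):
        if xmin <= ego[2] and xmax >= ego[0] and ymin <= ego[3] and ymax >= ego[1]:
            hits.append(i)
    return hits

def colliding_batched(boxes, ego):
    # Fast: the whole overlap test runs in native (NumPy) code.
    mask = ((boxes[:, 0] <= ego[2]) & (boxes[:, 2] >= ego[0])
            & (boxes[:, 1] <= ego[3]) & (boxes[:, 3] >= ego[1]))
    return np.flatnonzero(mask)

assert list(colliding_batched(boxes, ego)) == colliding_loop(boxes, ego)
```

On large scenario sets the batched version is typically orders of magnitude faster, which is the same effect as replacing a per-object Python loop with a single call into the native collision checker.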


Vitali, thank you for your hint. Where can we find this function?

I am not sure, but maybe you can have a look at the compute room on the ground floor of the Informatics Department?

I had a look: only 5% of the time is spent on collision detection. Everything else is trajectory feasibility checking together with optimization, implemented in Python. The actual time on a powerful machine is given here:

https://commonroad.in.tum.de/documentation/tutorials/commonroad_search/tut_01/cr_search_motion_primitives
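
If you want to check that split on your own machine, Python's built-in profiler gives a quick breakdown. The plan_for_scenario function below is only a placeholder for whatever planning entry point your notebook calls.

```python
# Sketch: profile one scenario run to see where the time goes.
import cProfile
import pstats

def plan_for_scenario():
    # Placeholder for the actual planning call in your notebook.
    sum(i * i for i in range(1_000_000))

profiler = cProfile.Profile()
profiler.enable()
plan_for_scenario()
profiler.disable()

# Print the 15 most expensive functions by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(15)
```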

It could be possible to get down to 5+5 sec per scenario by configuring the search algorithm parameters.

State-of-the-art algorithms have close-to-realtime performance. Search is search.

The compute room on the ground floor has very slow thin clients.

Use Google Colaboratory instead: sessions are limited to 12 hours or less, but the CPUs are powerful, and you can run several instances in parallel. 🙂 Configure once and upload the binaries to git, or use Docker.
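
For illustration, a first setup cell on Colab might look like the sketch below. The package name and repository URL are assumptions; substitute whatever your course setup actually uses.

```python
# Sketch of a Colab setup cell; '!' lines run as shell commands.
# Package and repository names are assumptions, substitute your own.
!pip install commonroad-io
!git clone https://gitlab.lrz.de/tum-cps/commonroad-search.git
%cd commonroad-search
```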

So how do we install it on Colab? Are there any steps? Because it is not just a normal installation.


Hi, when I run gbfs_only_time, I get 0 solutions with a 120 s timeout. I last pulled on the weekend. Any idea what could go wrong, or how to run that particular .py file?

Hi, I just checked again. It should be working.
Here is what I have done:

  1. download the virtual machine image
  2. change the planner id in the batch processing configuration file to 3
  3. rename MotionPlanner_gbfs_only_time.py to MotionPlanner.py (you can also change the code in batch_search.py to read MotionPlanner_gbfs_only_time.py instead; see the sketch after this list)
  4. run batch processing
  5. get results
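
As an alternative to renaming files in step 3, you could register the variant module under the expected name before batch processing imports it. This assumes batch_search.py does a plain `import MotionPlanner`; if it imports differently, adjust accordingly.

```python
# Sketch: make 'import MotionPlanner' resolve to the gbfs_only_time
# variant without renaming any files. Assumes batch_search.py does a
# plain 'import MotionPlanner'.
import importlib
import sys

sys.modules["MotionPlanner"] = importlib.import_module(
    "MotionPlanner_gbfs_only_time"
)
```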

If it still doesn’t work for you, you might try increasing the timeout to see if the situation changes.


It works. Thanks! However, @jkljkljkl, the same motion_planner_gbfs_only_time does not work in multiprocessing batch processing. It gives 0 solutions, whereas it worked with motion_planner_gbfs. 🙂

I have not used the one provided by @jkljkljkl, but I suppose temporarily changing the name from motion_planner_gbfs_only_time to motion_planner_gbfs should also work with his approach.

I did try all the permutations of the names! Weird error!

Hi!
I just made a new multicore batch processing notebook. You could download it and run it locally.

I only changed the planner id in the config file, renamed the files, restarted the kernel - and it worked.

Tested with both motion planners, worked well.
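
For anyone who wants the gist of such a notebook, here is a minimal multicore sketch. solve_scenario is a placeholder for the per-scenario planning call, and the glob pattern is an assumption about where the scenario files live.

```python
# Minimal sketch of multicore batch processing.
import glob
from multiprocessing import Pool

def solve_scenario(path):
    # Placeholder: load the scenario at 'path', run the motion planner,
    # and return (path, success_flag).
    return path, True

if __name__ == "__main__":
    scenarios = sorted(glob.glob("scenarios/*.xml"))
    with Pool() as pool:  # one worker process per CPU core by default
        results = pool.map(solve_scenario, scenarios)
    solved = sum(ok for _, ok in results)
    print(f"solved {solved}/{len(results)} scenarios")
```

Note that on platforms using the ‘spawn’ start method, the worker function generally has to live in an importable module rather than in a notebook cell itself, which may be related to the 0-solutions issue reported above.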