I figured the CLCS is created here, as part of the constructor of every metric (such as TTC), and that, according to the commonroad_dc docs, its default projection domain limit is 20 meters. I assume that's the problem in my case, so I'd like to increase the limit (is it as simple as that?). However, after following this tutorial and digging into the code, I couldn't find any way to pass a custom CLCS instance, or even to instantiate and provide a particular metric (e.g. TTC) myself. Can CriMe only evaluate scenarios where two actors are no further than 20 meters apart?!
Would appreciate some help here. Also, could you go a bit into what this curvilinear thing is even needed for and why it can be constructed from some arbitrary linestring? Thanks!
EDIT: As a temporary workaround, I manually edited the CriMe code and set the domain limit to 100,000 meters, but I'm still getting that error, so this probably wasn't the issue.
Thank you for using CriMe and for your question. Indeed, you've understood correctly: the projection of global coordinates to curvilinear ones can be problematic due to the non-unique mapping of the projection. Currently, one of my colleagues is working on enhancing the commonroad_dc to improve the robustness of this projection. This issue tends to be scenario-dependent since the reference path varies from case to case.
I apologize for the inconvenience: the interface for updating the CLCS was not functioning as expected. We have addressed this issue in the latest develop branch. Now you can update the CLCS using config.update(CLCS=<your newly defined CLCS>).
Regarding the 20 meters: this is the lateral range that can be used for the coordinate transformation, and it's generally larger than the sensor range of the ego vehicle. While awaiting the release of the new commonroad-dc by my colleague, I suggest updating the parameters as described here:
Hereâs how you can do it:
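A minimal sketch of what this could look like. The `CurvilinearCoordinateSystem` constructor arguments and the 50 m limit are assumptions based on the commonroad_dc tutorials and may differ between versions, so the library-specific calls are shown as comments:

```python
import numpy as np

# Reference path as an (n, 2) polyline of waypoints; here a hypothetical
# straight 200 m path just for illustration.
reference_path = np.array([[x, 0.0] for x in np.arange(0.0, 200.0, 2.0)])
assert reference_path.shape == (100, 2)

# Hypothetical, depending on your commonroad_dc version: construct a CLCS
# with a wider lateral projection domain (e.g. 50 m instead of the default
# 20 m) and hand it to CriMe via the config.
#
#   from commonroad_dc.pycrccosy import CurvilinearCoordinateSystem
#   clcs = CurvilinearCoordinateSystem(reference_path, 50.0, 0.1)
#   config.update(CLCS=clcs)
```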
We appreciate your patience and hope these instructions help you proceed with your work. Please feel free to reach out if you have any more questions or need further assistance.
However, I'm still struggling to fully grasp the meaning of the CLCS. My current understanding is the following, but I might be completely mistaken: computing HW, TTC, etc. requires a spherical / geodetic coordinate system (why is that, even?), but CommonRoad scenarios usually use Cartesian / projected coordinates. The CLCS constructs the former from the latter, given the reference path, but it's only valid within some limited spatial region. Is that correct so far?
If so, my follow-up question is: if the coordinates used in my scenario are represented in an actual, global coordinate reference system (such as EPSG:3857), for which a bijective projection to a spherical system (such as EPSG:4326) exists, can I just use that mapping instead?
Also, my scenarios - since they are not recorded from ego vehicle sensors but from infrastructure sensors (RSUs, etc.) - typically span far more than 20 meters around the ego. What would be the way to go to still compute criticality for them? How would I have to define my CLCS?
Sure. Yes, CommonRoad scenarios are described based on the global coordinate system. However, in the situation described below, with curly road geometry, the Euclidean distance does not accurately represent the headway distance, as it fails to illustrate the "realistic" relationship between two vehicles, which follows the lane structure.
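To make this concrete with a toy numpy example (not CriMe code): two vehicles a quarter turn apart on a circular road of radius 30 m are roughly 42 m apart in a straight line, but roughly 47 m apart along the lane:

```python
import numpy as np

# Two vehicles a quarter turn apart on a circular road of radius R = 30 m.
R = 30.0
theta1, theta2 = 0.0, np.pi / 2
p1 = np.array([R * np.cos(theta1), R * np.sin(theta1)])
p2 = np.array([R * np.cos(theta2), R * np.sin(theta2)])

euclidean = float(np.linalg.norm(p2 - p1))  # straight-line distance, ~42.4 m
along_lane = R * abs(theta2 - theta1)       # distance following the lane, ~47.1 m
```

The curvilinear (lane-following) headway is the larger, more "realistic" value; the stronger the curvature, the bigger the gap between the two numbers.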
"The CLCS constructs the former from the latter, given the reference path, but it's only valid within some limited spatial region." This part is correct. The limitation of the spatial region is due to the problem I described earlier.
I believe your projection differs from the one shown in the figure, as it is not lane-based but rather region-based.
The 20 meters pertain to the lateral deviation from the reference path. I assume that if a vehicle is more than 20 meters laterally away from it, the criticality is significantly lower anyway. Also, if the road is not strongly curved, a projection domain larger than 20 m laterally should not be a problem. The CLCS should be constructed based on the reference path, as illustrated in the figure. For more details regarding the CLCS, I would refer you to our CommonRoad tutorial.
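For intuition, here is a stripped-down, hypothetical version of such a projection onto a polyline reference path. commonroad_dc does considerably more (segment smoothing, unique-projection handling), but the lateral-limit behaviour is the same idea:

```python
import numpy as np

def to_curvilinear(point, path, lateral_limit=20.0):
    """Project a 2-D point onto a polyline reference path.

    Returns (s, d): arc length along the path and signed lateral offset.
    Raises ValueError if d exceeds the lateral projection-domain limit,
    mimicking the 'outside of projection domain' error.
    """
    point = np.asarray(point, dtype=float)
    best = None
    s_start = 0.0
    for a, b in zip(path[:-1], path[1:]):
        seg = b - a
        seg_len = np.linalg.norm(seg)
        t = np.clip(np.dot(point - a, seg) / seg_len ** 2, 0.0, 1.0)
        foot = a + t * seg            # closest point on this segment
        d_vec = point - foot
        d = np.linalg.norm(d_vec)
        if best is None or d < best[1]:
            # left of the driving direction counts as positive lateral offset
            cross = seg[0] * d_vec[1] - seg[1] * d_vec[0]
            sign = 1.0 if cross >= 0 else -1.0
            best = (s_start + t * seg_len, d, sign)
        s_start += seg_len
    s, d, sign = best
    if d > lateral_limit:
        raise ValueError("Coordinate outside of projection domain.")
    return s, sign * d
```

For a straight path along the x-axis, the point (5, 3) maps to s = 5, d = 3, while a point 30 m to the side of the path raises the projection-domain error.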
Thank you so much for this great explanation, it helped me a lot in my understanding! Two last questions, if you donât mind.
First, following up on your above example, how would the distance between two vehicles on different lanelets - possibly part of a very curvy road - be computed by CriMe? Take this scene as an example, comprising two dynamic obstacles and three lanelets (thus three CLCS by default):
And second, does CriMe attempt to compute distance (e.g. "Headway") between every possible pair of vehicles? Or only those occupying the same lanelet? Or only those within a certain spatial range? What's the default behavior there?
Possibly you could even have a quick look at why the above error occurs for the ZAM_Tjunction-1_307_T-1 scenario (among others) and how to work around it?
Sure. For the example you provided, we adhere to the lanelet structure to construct the reference path, as detailed here: https://github.com/CommonRoad/commonroad-crime/blob/develop/commonroad_crime/utility/general.py#L49. This process involves connecting the successors and predecessors of the current lanelet that the ego vehicle occupies. Then, the center vertices of the concatenated lanelet are used to construct the CLCS. In other words, only one CLCS is established.
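Schematically, that process looks like the following toy version (with hypothetical lanelet records, not the actual CriMe data structures):

```python
import numpy as np

# Hypothetical minimal lanelet records: id -> (centre vertices, predecessor, successor)
lanelets = {
    1: (np.array([[0.0, 0.0], [10.0, 0.0]]), None, 2),
    2: (np.array([[10.0, 0.0], [20.0, 0.0]]), 1, 3),
    3: (np.array([[20.0, 0.0], [30.0, 0.0]]), 2, None),
}

def reference_path_for(lanelet_id):
    """Chain predecessors + current lanelet + successors and concatenate
    their centre vertices, roughly mirroring the linked CriMe helper."""
    chain = [lanelet_id]
    pred = lanelets[chain[0]][1]
    while pred is not None:
        chain.insert(0, pred)
        pred = lanelets[pred][1]
    succ = lanelets[chain[-1]][2]
    while succ is not None:
        chain.append(succ)
        succ = lanelets[succ][2]
    return np.vstack([lanelets[i][0] for i in chain])
```

So for a vehicle on lanelet 2, the reference path runs through all three lanelets, and a single CLCS is built from it.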
Regarding your second question, it depends on how you use CriMe:
However, the logic behind HW and TTC begins by checking whether the vehicles are in the same lanelet. If not, the distance is set to np.inf. We believe this simplification makes sense because if the vehicles are not spatially close, the criticality value should be very low. However, you can update the source code to suit your use case, e.g., by defining a new measure or adapting the existing one.
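In simplified form, that logic amounts to something like this (illustrative only, with invented inputs; the real measures work on curvilinear states):

```python
import numpy as np

def headway(ego_lanelet_ids, other_lanelet_ids, s_ego, s_other):
    """Sketch of the described check: vehicles that share no lanelet get an
    infinite headway; otherwise the longitudinal gap along the reference path."""
    if not set(ego_lanelet_ids) & set(other_lanelet_ids):
        return np.inf
    return s_other - s_ego
```

For example, `headway({2}, {5}, 10.0, 30.0)` yields `inf` (different lanelets), while `headway({2}, {2}, 10.0, 30.0)` yields `20.0`.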
For your use case, which vehicle did you designate as the ego? Unfortunately, I cannot reproduce your error message without more information.
I think, depending on the structure of the map, only considering vehicles within the same lanelet might be a bit too lax, no? Especially if two vehicles are at the end and beginning of two adjacent lanelets respectively, their actual distance might be very small, but they'll still end up with ttc = np.inf.
For the example, this is an excerpt from my code which should suffice to reproduce the error:
```python
import tempfile

from commonroad.common.file_reader import CommonRoadFileReader
from commonroad_crime.data_structure.configuration import CriMeConfiguration
from commonroad_crime.data_structure.crime_interface import CriMeInterface
from commonroad_crime.measure import TTC, WTTC, PET, ALongReq, ALatReq, LongJ, LatJ

scenario_id: str = 'ZAM_Tjunction-1_307_T-1'
# first obstacle is considered ego
ego_id: int = CommonRoadFileReader(f'../data/{scenario_id}.xml').open()[0].dynamic_obstacles[0].obstacle_id

# dynamically generated yaml config in memory
with tempfile.NamedTemporaryFile('w', suffix='.yaml') as f:
    f.write(f'vehicle:\n ego_id: {ego_id}')
    f.flush()
    config = CriMeConfiguration.load(f.name, scenario_id)

config.general.path_scenarios = '../data/'
config.debug.save_plots = False
config.update()
config.print_configuration_summary()

timesteps: int = config.scenario.obstacle_by_id(config.vehicle.ego_id).prediction.final_time_step
crime_interface = CriMeInterface(config)
crime_interface.evaluate_scenario([TTC, WTTC, PET, ALongReq, ALatReq, LongJ, LatJ], time_end=147, verbose=False)
```
Yields:
ValueError: <CurvilinearCoordinateSystem/convertToCurvilinearCoordsAndGetSegmentIdx> Coordinate outside of projection domain.
Thanks a lot. Yes, you are absolutely right; previously it was not computed very precisely. I have now updated the computation of in_same_lanelet to consider the predecessors and successors of the occupied lanelet; see the latest fix-clcs-error branch, which will be merged into the develop branch soon.
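Conceptually, the relaxed check could look like this (a hypothetical topology and function for illustration, not the actual source):

```python
# Hypothetical lanelet topology: id -> (predecessors, successors)
topology = {
    1: ([], [2]),
    2: ([1], [3]),
    3: ([2], []),
}

def in_same_lanelet(ego_lanelet_ids, other_lanelet_ids):
    """Relaxed check: predecessors and successors of the ego's lanelets
    also count as 'same', so vehicles near a lanelet boundary are caught."""
    expanded = set(ego_lanelet_ids)
    for lanelet_id in ego_lanelet_ids:
        predecessors, successors = topology[lanelet_id]
        expanded.update(predecessors)
        expanded.update(successors)
    return bool(expanded & set(other_lanelet_ids))
```

With this, a vehicle at the end of lanelet 2 and one at the start of lanelet 3 are treated as sharing a lane, so the measure no longer defaults to np.inf for them.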
The error is now also handled by a try/except block. Regarding your measures, here are the updated results:
Sorry to bother you again, but I came across another hurdle. When computing metrics between two vehicles (ego and other) where the other exits the scenario early (other.final_time_step < ego.final_time_step), the whole program crashes with the following error:
Thank you very much for bringing this to our attention! Indeed, prior to your observation, the handling of missing states for both the ego vehicle and other vehicles was not adequately addressed. I've implemented a function for each measure that checks the attributes of the involved vehicles before the evaluation begins. You can find the update through this link. It took me some time to update all the measures, but you should now be able to see the changes in the develop branch.
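The idea of such a pre-check, as a self-contained sketch (the Vehicle record and its field names are made up for illustration):

```python
from collections import namedtuple

# Hypothetical minimal stand-in for a CommonRoad obstacle's time span
Vehicle = namedtuple("Vehicle", ["initial_time_step", "final_time_step"])

def states_available(ego, other, time_step):
    """Pre-check sketch: both vehicles need a state at this time step
    before a pairwise measure can be evaluated."""
    return all(
        v.initial_time_step <= time_step <= v.final_time_step
        for v in (ego, other)
    )

ego = Vehicle(0, 147)
other = Vehicle(0, 90)   # exits the scenario early
```

Evaluating a pair only when `states_available(...)` holds avoids the crash when the other vehicle has already left the scenario.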
Thank you so much! The only problem I have left now is the fact that CriMe seems to be super memory hungry. For a scenario with 95 dynamic obstacles and 399 timesteps, computing a bunch of criticality scores fills up my entire 32 GB of RAM after only ~ 130 timesteps before it crashes. Perhaps something is leaking memory somewhere?
Also, it seems that once an obstacle exists at a given timestep but doesn't actually have a state for that timestep, the whole metric becomes NaN. I ran a scenario featuring the above case (opponent vehicles "exiting" the scene before the ego or "entering" the scene after it) and all metrics ended up as NaN for all timesteps. Instead, I think obstacles that don't exist at a certain timestep should only be excluded from the computation for that timestep, rather than rendering the whole metric NaN.
Thank you very much for mentioning this. I will take a look at it! In the meantime, I suggest you evaluate a few measures at a time. Will keep you updated when we find the causes of the issue.
For the second issue, your perspective also makes sense. The rationale behind setting the value to NaN is to clearly indicate that an entry is missing during the visualization of the measure. Subsequently, one can employ postprocessing techniques to filter out these NaN values, especially if warnings are provided by us already. What do you think?
> Subsequently, one can employ postprocessing techniques to filter out these NaN values, especially if warnings are provided by us already.
Not exactly sure how postprocessing on the basis of the warnings could help here. evaluate_scene() will always iterate through all obstacles (which makes sense, of course), so I don't see a way to handle the error for a single one. Take the following example:
Ego (ID 1) has 0 <= t <= 5
Agent A (ID 2) has 0 <= t <= 5 as well
Agent B (ID 3) has 3 <= t <= 5
Running evaluate_scene() will still yield NaN for all t < 3 (because Agent B only enters the scene at t = 3), even though you could well compute the criticality for the scenes at t ∈ {0, 1, 2} that only consist of Ego and Agent A. Or am I making a mistake here?
Thank you so much for providing the example!! You are absolutely right; we didn't cover this in our unit tests. I have now modified the compute_criticality function to compute the criticality across all obstacles using the following logic, which handles the NaNs in the criticality lists (not yet merged into the develop branch):
```python
if len([c for c in criti_list if c is not None]) > 0:
    if np.all(np.isnan(criti_list)):
        utils_log.print_and_log_warning(
            logger,
            f"* Due to the missing entries, all elements are NaN, "
            f"the result for time step {time_step} is NaN",
        )
        return np.nan
    # Not all elements are NaN, return the max/min of the non-NaN values
    if self.monotone == TypeMonotone.POS:
        criti = np.nanmax(criti_list)
    else:
        criti = np.nanmin(criti_list)
```
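The effect on a single time step can be reproduced with plain numpy:

```python
import numpy as np

criti_list = [1.2, np.nan, 0.7]   # one agent missing at this time step
best = np.nanmax(criti_list)      # non-NaN entries are still evaluated
assert best == 1.2

all_missing = [np.nan, np.nan]    # every agent missing at this step
assert np.isnan(np.nanmax(all_missing))  # result stays NaN (numpy warns)
```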
Do you think this approach is more sensible, or would it be better to exclude obstacles that do not exist in the scenario? I felt that utilizing NaNs makes the handling more universal and also accommodates scenarios where the ego vehicle might not exist in certain time steps. I'm eager to hear your feedback!
By the way, regarding the memory issues, I presume you are using the P_MC measure in your evaluation? If that's the case, it might be due to all the simulated vehicle states being stored during the evaluation for subsequent visualization. I will address this issue along with the previous one in the develop branch.