Coordinate outside of projection domain error

I keep getting the following error when trying to evaluate a scenario such as ZAM_Tjunction-1_307_T-1.

ValueError: <CurvilinearCoordinateSystem/convertToCurvilinearCoordsAndGetSegmentIdx> Coordinate outside of projection domain.

Same error for scenarios that I created myself, where the coordinates are stored as EPSG:3857 global coordinates.

Could somebody please explain to me what this error is about and how to resolve it?

I figured the CLCS is being created here, as part of the constructor of every metric (such as TTC), and that, according to the commonroad_dc docs, its default projection domain limit is 20 meters. I assume that’s the problem in my case, so I’d like to increase the limit (is it as simple as that?). However, after following this tutorial and digging into the code, I couldn’t find any way to pass a custom CLCS instance, or even a way to instantiate a particular metric (e.g. TTC) myself and provide it. Can CriMe only evaluate scenarios where two actors are no further than 20 meters apart?! :thinking:

Would appreciate some help here. Also, could you go a bit into what this curvilinear thing is even needed for and why it can be constructed from some arbitrary linestring? Thanks!

EDIT: As a temporary workaround, I made some manual edits to the CriMe code and set the domain limit to 100,000 meters, but I’m still getting that error, so this probably wasn’t the issue.

Dear Ferdinand,

Thank you for using CriMe and for your question. Indeed, you’ve understood correctly: the projection of global coordinates to curvilinear ones can be problematic due to the non-unique mapping of the projection. Currently, one of my colleagues is working on enhancing the commonroad_dc to improve the robustness of this projection. This issue tends to be scenario-dependent since the reference path varies from case to case.

I apologize for the inconvenience caused by the interface for updating the CLCS not functioning as expected. We have addressed this issue in the latest develop branch. Now, you can update the CLCS by using config.update(CLCS=<your newly defined CLCS>).

Regarding the 20 meters, which is the lateral range that can be used for the coordinate transformation: it is generally larger than the sensor range of the ego vehicle. While awaiting the release of the new commonroad-dc by my colleague, I suggest updating the parameters as described here:
Here’s how you can do it:

# the second argument is the lateral projection domain limit (here 20 m)
curvilinear_cosy = CurvilinearCoordinateSystem(new_ref_path, 20, 0.1, 5.0)
config.update(CLCS=curvilinear_cosy)

We appreciate your patience and hope these instructions help you proceed with your work. Please feel free to reach out if you have any more questions or need further assistance.

Best regards,
Yuanfei


Thanks for your very detailed and helpful reply!

However, I’m still struggling to fully grasp the meaning of the CLCS. My current understanding is the following, but I might be completely mistaken: computing HW, TTC, etc. requires a spherical / geodetic coordinate system (why is that even needed?), but CommonRoad scenarios usually use Cartesian / projected coordinates. The CLCS constructs the former from the latter, given the reference path, but it’s only valid within some limited spatial region. Is that correct so far?

If yes, my follow-up question is: if the coordinates used in my scenario are represented in an actual, global coordinate reference system (such as EPSG:3857), for which a bijective projection to a spherical system (such as EPSG:4326) exists, can I just use that mapping instead?

Also, my scenarios - since they were not recorded from ego vehicle sensors but from infrastructure sensors (RSUs, etc.) - typically span far more than 20 meters around the ego. What would be the way to go to still compute criticality for them? How would I have to define my CLCS?

Sure. Yes, CommonRoad scenarios are described in the global coordinate system. However, in situations with curved road geometry, such as the one shown in the figure below, the Euclidean distance does not accurately represent the headway distance, as it fails to capture the “realistic” relationship between two vehicles, which follows the lane structure.
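A toy numerical illustration of that point (made-up geometry, no CriMe or commonroad_dc involved): on a 90-degree bend, the distance that matters for the headway is the travelled distance along the lane, which the straight-line (Euclidean) distance underestimates.

import numpy as np

radius = 50.0                               # 90-degree bend of radius 50 m
ego = np.array([radius, 0.0])               # vehicle entering the bend
other = np.array([0.0, radius])             # vehicle leaving the bend

euclidean = np.linalg.norm(other - ego)     # ~70.7 m straight-line distance
along_lane = radius * np.pi / 2             # ~78.5 m travelled along the lane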

“The CLCS constructs the former from the latter, given the reference path, but it’s only valid within some limited spatial regions.” This part is correct. The limitation of the spatial region is due to the problem I described earlier.

I believe your projection differs from the one shown in the figure, as it is not lane-based but rather region-based.

The 20 meters pertain to the lateral deviation from the reference path. I assume that if a vehicle is more than 20 meters laterally away from it, the criticality is significantly lower. Also, if the road is not very curvy, a projection domain of more than 20 m laterally should not be a problem. The CLCS should be constructed based on the reference path, as illustrated in the figure. For more details regarding the CLCS, I would refer you to our CommonRoad tutorial.


Thank you so much for this great explanation, it helped me a lot in my understanding! Two last questions, if you don’t mind.

First, following up on your above example, how would the distance between two vehicles on different lanelets - possibly part of a very curvy road - be computed by CriMe? Take this scene as an example, comprising two dynamic obstacles and three lanelets (thus three CLCS by default):

[figure: example scene with two dynamic obstacles and three lanelets]

And second, does CriMe attempt to compute distance (e.g. the “Headway” measure) between every possible pair of vehicles? Or only between those occupying the same lanelet? Or only between those within a certain spatial range? What’s the default behavior there?

Possibly you could even have a quick look at why the above error is occurring for the ZAM_Tjunction-1_307_T-1 scenario (among others) and how to work around that?

Sure. For the example you provided, we adhere to the lanelet structure to construct the reference path, as detailed here: https://github.com/CommonRoad/commonroad-crime/blob/develop/commonroad_crime/utility/general.py#L49. This process involves connecting the successors and predecessors of the current lanelet that the ego vehicle occupies. Then, the center vertices of the concatenated lanelet are used to construct the CLCS. In other words, only one CLCS is established.
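To make that construction concrete, here is a rough sketch of the idea (not the CriMe implementation itself): look up the lanelet the ego occupies, prepend one predecessor and append one successor, and use the concatenated center vertices as the reference path. The commonroad_dc import path is assumed to be commonroad_dc.pycrccosy, the CurvilinearCoordinateSystem arguments are simply taken from the snippet earlier in this thread, and duplicated boundary vertices are not removed.

import numpy as np
from commonroad_dc.pycrccosy import CurvilinearCoordinateSystem

lanelet_network = config.scenario.lanelet_network
ego = config.scenario.obstacle_by_id(config.vehicle.ego_id)

# lanelet(s) occupied by the ego at its initial state
occupied_ids = lanelet_network.find_lanelet_by_position([ego.initial_state.position])[0]
lanelet = lanelet_network.find_lanelet_by_id(occupied_ids[0])

# concatenate the center vertices of predecessor -> ego lanelet -> successor
vertices = [lanelet.center_vertices]
if lanelet.predecessor:
    vertices.insert(0, lanelet_network.find_lanelet_by_id(lanelet.predecessor[0]).center_vertices)
if lanelet.successor:
    vertices.append(lanelet_network.find_lanelet_by_id(lanelet.successor[0]).center_vertices)
new_ref_path = np.concatenate(vertices, axis=0)

curvilinear_cosy = CurvilinearCoordinateSystem(new_ref_path, 20, 0.1, 5.0)
config.update(CLCS=curvilinear_cosy)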

Regarding your second question, it depends on how you use CriMe:

  1. To compute the criticality for every pair of vehicles, you should use the function compute_criticality, which is also embedded in the evaluate_scene and evaluate_scenario functions ( evaluate_scenario_I.py, evaluate_scene_I_II.py). See the usage here: https://github.com/CommonRoad/commonroad-crime/blob/develop/commonroad_crime/data_structure/base.py#L206.
  2. To compute for a target vehicle, you need to instantiate a HW object and use its compute function, specifying the target vehicle ID (see the sketch after this list). See an example here: https://github.com/CommonRoad/commonroad-crime/blob/develop/tests/test_distance_domain.py#L44.
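A minimal sketch of both options follows. The measure import path, the evaluate_scene signature, and the compute argument order are assumptions based on the linked files and may differ between CriMe versions; please check the linked test for the exact usage.

from commonroad_crime.measure import HW, TTC  # import path may vary by version

# (1) evaluate all obstacles of a scene via the interface
crime_interface = CriMeInterface(config)
crime_interface.evaluate_scene([HW, TTC], time_step=0)

# (2) evaluate a single target vehicle via the measure object itself
hw_object = HW(config)
hw_value = hw_object.compute(other_vehicle_id, 0)  # assumed order: (target vehicle ID, time step)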

However, the logic behind HW and TTC begins by checking whether the vehicles are in the same lanelet. If not, the distance is set to np.inf. We believe this simplification makes sense because if the vehicles are not spatially close, the criticality value should be very low. However, you can update the source code to suit your use case, e.g., by defining a new measure or adapting the existing one.

For your use case, which vehicle did you designate as the ego? Unfortunately, I cannot reproduce your error message without more information.


Thanks again for the reply!

I think, depending on the structure of the map, only considering vehicles within the same lanelet might be a bit too lax, no? Especially if two vehicles are at the end and beginning of two adjacent lanelets respectively, their actual distance might be very small, but they’ll still end up with ttc = np.inf :thinking:.

For the example, this is an excerpt from my code which should suffice to reproduce the error:

import tempfile

from commonroad.common.file_reader import CommonRoadFileReader
# CriMe imports; the exact module paths may differ depending on the CriMe version
from commonroad_crime.data_structure.configuration import CriMeConfiguration
from commonroad_crime.data_structure.crime_interface import CriMeInterface
from commonroad_crime.measure import TTC, WTTC, PET, ALongReq, ALatReq, LongJ, LatJ

scenario_id: str = 'ZAM_Tjunction-1_307_T-1'
ego_id: int = CommonRoadFileReader(f'../data/{scenario_id}.xml').open()[0].dynamic_obstacles[0].obstacle_id  # first obstacle is considered ego

# dynamically generated yaml config in memory
with tempfile.NamedTemporaryFile('w', suffix='.yaml') as f:
    f.write(f'vehicle:\n  ego_id: {ego_id}')
    f.flush()

    config = CriMeConfiguration.load(f.name, scenario_id)
    config.general.path_scenarios = '../data/'
    config.debug.save_plots = False
    config.update()
    config.print_configuration_summary()

timesteps: int = config.scenario.obstacle_by_id(config.vehicle.ego_id).prediction.final_time_step

crime_interface = CriMeInterface(config)
crime_interface.evaluate_scenario([TTC, WTTC, PET, ALongReq, ALatReq, LongJ, LatJ], time_end=147, verbose=False)

Yields:

ValueError: <CurvilinearCoordinateSystem/convertToCurvilinearCoordsAndGetSegmentIdx> Coordinate outside of projection domain.

Dear Ferdinand,

Thanks a lot. Yes, you are absolutely right; beforehand it was not computed very precisely. I have now updated the computation of in_same_lanelet to consider the predecessors and successors of the occupied lanelet, see the latest fix-clcs-error branch, which will be merged into the develop branch soon.
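For illustration, a hedged sketch of that idea (hypothetical helper, not the actual code in the branch): two vehicles count as “in the same lanelet” if the lanelets occupied by the other vehicle overlap with the ego’s occupied lanelets or with their direct predecessors/successors.

def in_same_lanelet(lanelet_network, pos_ego, pos_other) -> bool:
    ids_ego = set(lanelet_network.find_lanelet_by_position([pos_ego])[0])
    ids_other = set(lanelet_network.find_lanelet_by_position([pos_other])[0])
    # extend the ego's lanelets by their direct predecessors and successors
    extended = set(ids_ego)
    for lanelet_id in ids_ego:
        lanelet = lanelet_network.find_lanelet_by_id(lanelet_id)
        extended.update(lanelet.predecessor)
        extended.update(lanelet.successor)
    return not extended.isdisjoint(ids_other)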

The error is now also handled by a try-except block. Regarding your measures, here are the updated results:

Thank you again for pointing out these issues. Your insights greatly contribute to enhancing the robustness of our CriMe evaluation.

Best regards,
Yuanfei


Thanks for fixing this! Looking forward to giving it a shot on my own custom scenarios as well. However, I don’t see a fix-clcs-error branch (here).

Sure :wink: sorry, I pushed it to the internal repo. Now it is merged into the develop branch, which you should be able to see. Looking forward to your results!


Sorry to bother you again, but another hurdle I came across is this. When computing metrics between two vehicles (ego and other), where the other exits the scenario early (other.final_time_step < ego.final_time_step), the whole program crashes with the following error:

File ~/dev/atks/deeptest/venv3.9/lib/python3.9/site-packages/commonroad_crime/utility/solver.py:60, in solver_wttc(veh_1, veh_2, time_step, a_max)
     56     r_v2, _ = compute_disc_radius_and_distance(
     57         veh_2.obstacle_shape.length, veh_2.obstacle_shape.width
     58     )
     59 x_10, y_10 = veh_1.state_at_time(time_step).position
---> 60 x_20, y_20 = veh_2.state_at_time(time_step).position
     61 v_1x0, v_1y0 = (
     62     veh_1.state_at_time(time_step).velocity,
     63     veh_1.state_at_time(time_step).velocity_y,
     64 )
     65 v_2x0, v_2y0 = (
     66     veh_2.state_at_time(time_step).velocity,
     67     veh_2.state_at_time(time_step).velocity_y,
     68 )

AttributeError: 'NoneType' object has no attribute 'position'

It would be cool if such a case could be handled “gracefully”. What do you think?

Dear Ferdinand,

Thank you very much for bringing this to our attention! Indeed, prior to your observation, the handling of missing states for both the ego vehicle and other vehicles was not adequately addressed. I’ve implemented a function for each measure that checks the attributes of the involved vehicles before the evaluation begins. You can find the update through this link. It took me some time to update all the measures, but you should now be able to see the changes in the develop branch.
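For reference, a minimal sketch of what such a check boils down to (hypothetical helper, not the actual function in the commit): a vehicle that has already left (or not yet entered) the scenario returns None from state_at_time, as seen in the traceback above, so the measure should be skipped for that time step.

def states_available(ego, other, time_step) -> bool:
    # skip the evaluation if either vehicle has no state at this time step
    return (
        ego.state_at_time(time_step) is not None
        and other.state_at_time(time_step) is not None
    )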

I hope this resolves the issue effectively.

Best regards,
Yuanfei


Thank you so much! The only problem I have left now is the fact that CriMe seems to be super memory hungry. For a scenario with 95 dynamic obstacles and 399 timesteps, computing a bunch of criticality scores fills up my entire 32 GB of RAM after only ~ 130 timesteps before it crashes. Perhaps something is leaking memory somewhere?

Also, it seems like as soon as there is an obstacle in the scenario that doesn’t have a state for a given timestep, the whole metric becomes nan. I ran a scenario that features the above case (opponent vehicles “exiting” the scene before the ego or “entering” the scene after the ego) and all metrics ended up as nan for all timesteps. Instead, I think obstacles that don’t exist at a certain timestep should simply be excluded from the computation for that timestep instead of rendering the whole metric nan.

Thank you very much for mentioning this. I will take a look at it! In the meantime, I suggest you evaluate a few measures at a time. Will keep you updated when we find the causes of the issue.


For the second issue, your perspective also makes sense. The rationale behind setting the value to NaN is to clearly indicate that an entry is missing during the visualization of the measure. Subsequently, one can employ postprocessing techniques to filter out these NaN values, especially if warnings are provided by us already. What do you think?

Subsequently, one can employ postprocessing techniques to filter out these NaN values, especially if warnings are provided by us already.

Not exactly sure how postprocessing on the basis of the warnings could help here. evaluate_scene() will always iterate through all obstacles (which makes sense, of course), so I don’t see a way to handle the error for a single one. Take the following example:

  • Ego (ID 1) has 0 <= t <= 5
  • Agent A (ID 2) has 0 <= t <= 5 as well
  • Agent B (ID 3) has 3 <= t <= 5

Running evaluate_scene() will still yield nan for all t < 3 (because Agent B only enters the scene at t=3), even though you could well compute the criticality for the scenes at t ∈ [0, 1, 2] that only consist of Ego and Agent A. Or am I making a mistake here?

Thank you so much for providing the example!! You are absolutely right; we didn’t cover this in our unit tests. I have now modified the compute_criticality function to compute the criticality across all obstacles using the following snippet, which handles the NaNs in the criticality lists (not yet merged into the develop branch):

if len([c for c in criti_list if c is not None]) > 0:
    if np.all(np.isnan(criti_list)):
        utils_log.print_and_log_warning(
            logger,
            f"* Due to the missing entries, all elements are NaN, "
            f"the result for time step {time_step} is NaN",
        )
        return np.nan
    # Not all elements are NaN, return the max/min of the non-NaN values
    if self.monotone == TypeMonotone.POS:
        criti = np.nanmax(criti_list)
    else:
        criti = np.nanmin(criti_list)
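A quick illustration of the intended behavior with made-up values: as long as at least one entry is a number, the NaN entries are simply ignored by np.nanmax / np.nanmin; only an all-NaN list yields NaN, and that case is caught by the np.all(np.isnan(...)) branch above.

import numpy as np

criti_list = [np.nan, 2.5, 0.8]   # e.g. Agent B has no state at this time step
np.nanmax(criti_list)             # 2.5 -- the NaN entry is ignored
np.nanmax([np.nan, np.nan])       # nan (with a RuntimeWarning) -- caught by the all-NaN branch above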

Do you think this approach is more sensible, or would it be better to exclude obstacles that do not exist in the scenario? I felt that utilizing NaNs makes the handling more universal and also accommodates scenarios where the ego vehicle might not exist in certain time steps. I’m eager to hear your feedback!

best,
Yuanfei


By the way, regarding the memory issues: I presume you are using the P_MC measure in your evaluation? If that’s the case, it might be due to all the simulated vehicle states being stored during the evaluation for subsequent visualization. I will address this issue along with the previous one in the develop branch.