Crunching the Numbers
In a paper you co-authored on evaluating methods for identifying hot spots, you point out the “alarming” practice by safety agencies of using accident rates to rank hot spots. Intuitively, that would seem to be an acceptable method, but it performs very poorly in identifying them. Could you explain?
There is an ongoing misperception that one can use accident rates to level the playing field with respect to exposure when identifying high-risk locations. This is marginally true at high levels of aggregation but increasingly less true as one begins to examine particular types of sites. The problem with using crash rates is that they typically decrease as exposure rises beyond a certain threshold, because crashes tend to grow more slowly than exposure itself. In other words, as traffic volumes increase over time with growth of VMT, accident rates generally tend to improve. So comparing two otherwise similar sites with differing VMT by their crash rates often does not yield a meaningful gauge of their relative safety.
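The point can be sketched numerically. Safety performance functions in the road-safety literature are often power models of the form crashes = a · AADT^b with b < 1, so expected crashes grow sublinearly with volume and the crash *rate* falls as volume rises. The coefficients below are illustrative placeholders, not fitted values from any study:

```python
def expected_crashes(aadt, a=0.002, b=0.7):
    """Hypothetical power-model SPF: expected annual crashes at a site.

    a and b are illustrative; b < 1 means crashes grow sublinearly
    with traffic volume (AADT).
    """
    return a * aadt ** b

for aadt in (5_000, 20_000, 80_000):
    crashes = expected_crashes(aadt)
    rate = crashes / aadt * 1_000_000  # crashes per million vehicles
    print(f"AADT {aadt:>6}: expected crashes {crashes:5.2f}, rate {rate:6.1f}")
```

Under these assumed parameters the busiest site has the most expected crashes but the *lowest* rate, so ranking by rate would push the highest-volume (and arguably highest-payoff) sites down the list.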
Another critical aspect of the relationship between safety and exposure is the changing crash severity distribution as VMT increases—this is true on road segments and at intersections. Clearly a fatal crash is more harmful to society than an injury crash, which in turn is more harmful than a property-damage-only (PDO) crash. A research interest of mine is to improve and standardize how we incorporate crash severity into high-risk site identification.
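One common way to fold severity into site ranking is a severity-weighted crash score, where each severity level carries a relative cost weight. The weights below are illustrative placeholders, not an agency's official crash-cost figures:

```python
# Illustrative relative weights (hypothetical, not official cost values).
SEVERITY_WEIGHTS = {"fatal": 500, "injury": 10, "pdo": 1}

def weighted_score(counts):
    """Combine crash counts by severity into a single comparable score."""
    return sum(SEVERITY_WEIGHTS[sev] * n for sev, n in counts.items())

site_a = {"fatal": 0, "injury": 12, "pdo": 40}  # many minor crashes
site_b = {"fatal": 1, "injury": 2, "pdo": 10}   # fewer, but more severe

print(weighted_score(site_a))  # 160
print(weighted_score(site_b))  # 530
```

With these assumed weights, the site with fewer total crashes but one fatality ranks higher—exactly the kind of reordering a raw crash count or crash rate would miss.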
From an interview with Simon Washington, the new head of Berkeley’s TSC. Well worth a read in its entirety.
This entry was posted on Friday, July 3rd, 2009 at 4:21 am and is filed under Traffic safety.