{"id":327,"date":"2024-08-21T01:31:15","date_gmt":"2024-08-21T01:31:15","guid":{"rendered":"https:\/\/neuromorphicrobotics.com\/?p=327"},"modified":"2025-06-30T09:17:39","modified_gmt":"2025-06-30T09:17:39","slug":"how-to-implement-low-latency-automotive-vision-with-event-cameras","status":"publish","type":"post","link":"https:\/\/braininspiredrobotics.com\/?p=327","title":{"rendered":"How to implement low-latency automotive vision with event cameras?"},"content":{"rendered":"<p style=\"text-align: justify;\">Gehrig, D., Scaramuzza, D. <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07409-w\"><strong>Low-latency automotive vision with event cameras<\/strong><\/a>. Nature 629, 1034\u20131040 (2024). https:\/\/doi.org\/10.1038\/s41586-024-07409-w<\/p>\n<p style=\"text-align: justify;\">Abstract<br \/>\n&#8220;The computer vision algorithms used currently in advanced driver assistance systems rely on<strong><span style=\"color: #ff0000;\"> image-based RGB cameras, leading to a critical bandwidth\u2013latency trade-off for delivering safe driving experiences<\/span><\/strong>. To address this, event cameras have emerged as alternative vision sensors. Event cameras measure the changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements1. Despite these advantages, <strong><span style=\"color: #ff0000;\">event-camera-based algorithms are either highly efficient but lag behind image-based ones in terms of accuracy or sacrifice the sparsity and efficiency of events to achieve comparable results<\/span><\/strong>. To overcome this, here <strong><span style=\"color: #ff0000;\">we propose a hybrid event- and frame-based object detector that preserves the advantages of each modality and thus does not suffer from this trade-off<\/span><\/strong>. Our method exploits the <strong><span style=\"color: #ff0000;\">high temporal resolution and sparsity of events<\/span><\/strong> and the rich but <strong><span style=\"color: #ff0000;\">low temporal resolution information<\/span><\/strong> in standard images to generate <strong><span style=\"color: #ff0000;\">efficient, high-rate object detections<\/span><\/strong>, <strong><span style=\"color: #ff0000;\">reducing perceptual and computational latency<\/span><\/strong>. We show that the use of a 20 frames per second (fps) RGB camera plus an event camera can achieve the same latency as a <strong><span style=\"color: #ff0000;\">5,000-fps camera with the bandwidth of a 45-fps camera without compromising accuracy<\/span><\/strong>. Our approach paves the way for efficient and robust perception in edge-case scenarios by uncovering the potential of event cameras2.&#8221;<\/p>\n<p style=\"text-align: justify;\">Gehrig, D., Scaramuzza, D. <a href=\"https:\/\/www.nature.com\/articles\/s41586-024-07409-w\"><strong>Low-latency automotive vision with event cameras<\/strong><\/a>. Nature 629, 1034\u20131040 (2024). https:\/\/doi.org\/10.1038\/s41586-024-07409-w<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Gehrig, D., Scaramuzza, D. Low-latency automotive vision with event cameras. Nature 629, 1034\u20131040 (2024). https:\/\/doi.org\/10.1038\/s41586-024-07409-w Abstract &#8220;The computer vision algorithms used currently in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth\u2013latency trade-off for delivering safe driving experiences. To address this, event cameras have emerged as alternative vision sensors. 
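For a concrete picture of the high-rate detection scheme, below is a minimal, self-contained Python sketch of the scheduling idea only: a slow frame-based detector re-anchors the state every 50 ms, while cheap event-driven updates refresh it every 0.2 ms in between. This is not the paper's method (the authors' detector is a learned hybrid network operating on raw events and images); the detector stub, the synthetic event model, and all names below are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the detector from Gehrig & Scaramuzza (2024).
# It mimics the timing pattern behind the paper's claim: frame-based
# detections every 50 ms (20 fps), event-driven refreshes every 0.2 ms.
import numpy as np

rng = np.random.default_rng(0)

FRAME_DT = 1 / 20      # 20 fps RGB camera: new frame every 50 ms
UPDATE_DT = 1 / 5000   # event-driven refresh every 0.2 ms (5,000 Hz)

def frame_detector(t):
    """Hypothetical stand-in for a frame-based CNN detector.

    The 'scene' is a single object whose true center moves right at
    400 px/s; a real detector would return boxes computed from the frame.
    """
    return np.array([40.0 + 400.0 * t, 120.0])

def event_update(center, flows):
    """Crude event-based refresh: shift the detection by the mean
    displacement implied by event motion inside the current box."""
    if len(flows) == 0:
        return center
    return center + flows.mean(axis=0)

t = 0.0
center = frame_detector(t)     # initial detection from the first frame
next_frame = FRAME_DT

while t < 0.2:                 # simulate 200 ms of driving
    t += UPDATE_DT
    if t >= next_frame:
        center = frame_detector(t)   # slow, accurate re-anchoring
        next_frame += FRAME_DT
    else:
        # Synthetic events: 8 noisy flow vectors consistent with the
        # object's 400 px/s rightward motion over one update interval.
        flows = rng.normal([400.0 * UPDATE_DT, 0.0], 0.02, size=(8, 2))
        center = event_update(center, flows)

print("estimated center:", center)
print("true center:     ", frame_detector(t))
```

Even with the deliberately noisy synthetic events, the estimate stays within a fraction of a pixel of the true position, because the frame detector corrects any accumulated drift every 50 ms; this division of labor, not the toy tracker itself, is the point of the sketch.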