News

Autonomous cars using human-style behaviour to see, learn, find Sarah Connor…

Light Detection and Ranging (LiDAR) sensors are the current gold standard in autonomous vehicle technology – but what if there were a way to give self-driving cars ‘vision’?

Tom Fraser
03:03, 11 June 2021

Considering the goal is for autonomous cars to perceive their environment at least as accurately as the human brain, self-driving cars have their work cut out for them. They must be able to distinguish between other vehicles, road signs, pedestrians, obstacles and cyclists at various distances. Not to mention, the car itself must know how to avoid those obstacles.

But new research out of Cornell University has found that autonomous cars could soon get their very own set of ‘eyes’, using two cameras connected by a neural network that pieces the two feeds together.

The current practice is to use Light Detection and Ranging (LiDAR) sensors, which work by emitting pulses of light and measuring how long each beam takes to reflect off surrounding surfaces. But this is both costly and complicated.
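
For illustration, here is a minimal sketch of the time-of-flight principle behind LiDAR (our own example with hypothetical values, not code from the research): the distance to a surface is half the round trip travelled at the speed of light.

# Rough time-of-flight illustration (hypothetical values, not Cornell's code).
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Distance to a surface given how long a light pulse took to bounce back."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse returning after roughly 200 nanoseconds implies a surface about 30 m away.
print(f"{distance_from_return_time(200e-9):.1f} m")  # ~30.0 m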

According to Kilian Weinberger from Cornell University’s Computer Science division, who is exploring the technology, neural network algorithms are the key to making the two separate cameras piece their imagery together to interpret the surroundings.
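
To make the geometry concrete, here is a simplified sketch of how two offset camera views can yield depth, using hypothetical camera parameters; the hard part the neural network actually solves (matching pixels between the two feeds to estimate their shift, or disparity) is taken as given here.

# Simplified stereo geometry (hypothetical numbers; the network's real job is
# estimating the per-pixel disparity used below).
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by both cameras, from how far it shifts between the two images."""
    return focal_length_px * baseline_m / disparity_px

# With a 1000-pixel focal length and cameras mounted 0.5 m apart, a point that
# shifts 10 pixels between the left and right images sits about 50 m away.
print(depth_from_disparity(1000.0, 0.5, 10.0))  # 50.0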

Though this technology isn’t exactly new, it is the first time the process has been explained explicitly, and it is the implementation most likely to see real-world application.

It was achieved using machine learning, which essentially enables the artificial intelligence to learn from experience. Rather than providing explicit code for the software to follow, machine learning feeds it examples of how to react in a given situation, allowing the software to notice patterns and react accordingly.
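
As a loose illustration of that idea (our own toy example using scikit-learn, not the researchers' toolchain), a model can be shown labelled examples of situations and reactions instead of hand-written rules, and then predict a reaction for a situation it hasn't seen.

# Toy "learning from examples" sketch (hypothetical data, not the actual system).
from sklearn.tree import DecisionTreeClassifier

# Each example: [distance to object in metres, object speed in m/s]; label: 1 = brake, 0 = continue.
examples = [[5, 0], [10, 2], [50, 0], [80, 1]]
reactions = [1, 1, 0, 0]

model = DecisionTreeClassifier().fit(examples, reactions)
print(model.predict([[7, 1]]))  # the learned pattern suggests braking: [1]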

But, not unlike the 'learning machines' of science fiction, the artificial intelligence can become overly confident in its perceptions and decide that it’s always right. After being shown examples to ‘view’ without making mistakes, the system will assume it always understands a given situation correctly, even when it doesn’t.

Of course, this could have detrimental real-world outcomes, so Weinberger and his team are currently developing a ‘spectrum’ of accuracy, intended to better depict just how well the system understands a given situation.
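
As a rough sketch of what a confidence 'spectrum' could look like in practice (our own illustration; the team's actual method isn't described in detail), raw model scores can be turned into graded probabilities, with a low maximum confidence triggering more cautious behaviour.

# Hypothetical confidence 'spectrum' sketch, not the team's published approach.
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    """Convert raw scores into probabilities that sum to one."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

scores = np.array([2.1, 1.9, 0.3])   # e.g. pedestrian, cyclist, clear road
confidence = softmax(scores)
print(confidence.round(2))           # roughly [0.5, 0.41, 0.08]

if confidence.max() < 0.6:           # no single answer is confident enough
    print("Uncertain - fall back to cautious behaviour")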

Suffice it to say, full self-driving and learning capability is still some time away.

Tom Fraser

Journalist

Tom started out in the automotive industry by exploiting his photographic skills, but quickly learned that journalists got the better end of the deal. He began with CarAdvice in 2014, left in 2017 to join Bauer Media titles including Wheels and WhichCar, and subsequently returned to CarAdvice in early 2021 during its transition to Drive. As part of the Drive content team, Tom covers automotive news, car reviews and advice, and holds a special interest in long-form feature stories. He understands that every car buyer is unique and has varying requirements when it comes to buying a new car, but equally, there's also a loyal subset of the Drive audience that loves entertaining enthusiast content. Tom holds a deep respect for all things automotive no matter the model, priding himself on noticing the subtle things that make each car tick. Not a day goes by that he doesn't learn something new in an ever-changing industry, which is then imparted to the Drive reader base. He's one of the lucky few who can say he loves his job and is a die-hard BMW fan – just ask him.
