AV camera technology going forward

June 30, 2022
With Level 3 autonomous vehicles just around the corner, manufacturers are moving away from expensive radar to more cameras and sensors.


What you will learn:

• Human intervention will still be required with vehicle autonomy

• Stereoscopic and monocular lenses, much like those in old cameras, are still used in today's ADAS systems

• ADAS processing can make driving decisions faster than the human mind


My great-grandfather was a technician and shop owner on the east side of Detroit. Second to his love of all that rolled was his fondness for cameras. I believe – judging by the selection he left me – that he owned about every type, from 35mm to Instamatic to one of the first SX-70 Polaroid Land Cameras. He would have appreciated the digital age.

I was always getting his optical hand-me-downs, which I would dissect for analysis and rebuild. I appreciated his previous top choice before the latest model became his favorite pet, which he enthusiastically shared. Back then, cameras weren’t cheap. And the 35mm film that told their story was equally expensive, sometimes requiring cost-cutting by pushing or pulling the exposures. Either way, you planned the shot, framed the shot, and focused.

As I got older and gained a bigger appreciation for optics, I purchased a telescope capable of photographing the heavens via special adapters. Again, it was choosing where you wanted to be in the universe and making sure those specks of light came out crystal clear. So, I find it curious that my interest in optics and automobiles has come full circle when it comes to ADAS.  

While proximity sensors, lidar, and radar can create digital images, the time-tested camera is still in the picture, now in stereoscopic and monocular lens arrangements. These types – multi-lens or single – can provide distance, depth, and an overall picture, much like the photo app on our phones. Like a set of human eyes, one lens is fixed as the perception reference while the other gauges how far an object is from that reference point. The same holds true for vehicle camera lenses.
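
For readers who like to see the math, here is a minimal Python sketch of that two-eye geometry: a calibrated stereo pair reports how many pixels an object shifts between its two lenses (the disparity), and distance falls out of the triangle. The focal length, lens spacing, and pixel values below are illustrative assumptions, not any supplier's specifications.

```python
# Minimal sketch: depth from a calibrated stereo camera pair.
# Z = f * B / d -- f is focal length in pixels, B is the baseline between
# the two lenses in meters, d is the disparity (pixel shift of the same
# object between the two images). All values are illustrative assumptions.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the estimated distance to an object, in meters."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two images")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 12 cm between lenses, 8 px shift
print(f"{stereo_depth_m(1000, 0.12, 8):.1f} m")   # ~15.0 m
```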

Where technology and quality cross paths

There is a reason that ADAS calibration/recalibration criteria for any forward-facing camera (FFC) – the one mounted inside the vehicle, behind the windshield – call for windshield quality verification. Yes, a number of issues have been identified with what may be inferior parts. The calibration/recalibration is doomed before the scan tool is connected when an aftermarket glass has taken the place of a known-good part. (Check that corner for the manufacturer logo before going forward.)

A colleague was telling me about an ADAS case study where a camera solved another camera’s untimely fault code. The vehicle was less than a year old. Suddenly, its FFC could not “see”; it set a code and disabled ADAS functionality. The factory windshield was still intact. There was no factory TSB regarding the FFC. After verifying the electrical connections and communications were present, a second opinion was called in.  

This over-the-air analysis – with a visual assist from a smartphone app – allowed the second set of eyes to see what the tech could not: distortion at the windshield level. The camera lens on the tech’s phone captured the shot in enough detail that the glass issue was visible on the receiving phone. It revealed that a film of window cleaner from an overzealous detailer had slid under the insulation and collected in front of the camera’s line of sight. Clean. Recalibrate. One and done.

More manufacturers are leaning toward multi-camera setups, dropping the more expensive radar and choosing to see what’s in front of the vehicle rather than rely on a waveform combo. Even production Tesla automobiles (as of 2021) have dropped radar in favor of cameras. Why? It is all about the math, and I’m not talking about sensor timing. It is the bean-counters adding up the higher cost of radar. We’ll see what happens in the future; sometimes cutting costs leads to higher expense down the road. Experimentation is the price we all pay for innovation.

And take this refresher note, Tesla owners: the manufacturer wants you to clean – with an approved product – every camera lens before getting into the vehicle and driving semi-autonomously. That rule is straight out of the owner’s manual. I wonder how many owners take the time to follow this camera-cleaning rule to ensure safety?

Cameras mimic our eyes’ ability to gather radiant energy – photons. These little particles arrive in different wavelengths and are scattered and diffracted everywhere, like peanut butter smeared on a slice of bread, with no method, rhyme, or reason to the blob on the wedge. That is, until we take a second, more concentrated look. Add magnification and focus – a lens combination – to bring out the details of the crushed nuts. Vehicle cameras are designed the same way: the lens pack concentrates the photons into a recognizable shape at a defined focal point, and the software, trained through machine learning, decides what that shape is.

If you look closely enough, you can see that there are two lens elements in an automotive camera, convex and concave. The convex element accepts the concave element’s light rays and creates a focal point. An advanced stereoscopic camera not only pinpoints the target but sees the object three-dimensionally. Therefore, it is able to deduce not only size but velocity – speed and direction – of the item caught in the lens’s frame.
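
How does "seeing in 3D" become speed and direction? One simple way – sketched below with made-up frame timing and ranges, not any OEM's actual algorithm – is to compare successive range readings of the same tracked object.

```python
# Minimal sketch: speed and direction from two successive range readings of
# the same tracked object. The frame interval and ranges are illustrative.

def closing_speed_mps(range_prev_m: float, range_now_m: float, dt_s: float) -> float:
    """Positive result = object getting closer; negative = pulling away."""
    return (range_prev_m - range_now_m) / dt_s

# Example: object at 30.0 m, then 29.5 m one frame (33 ms) later
v = closing_speed_mps(30.0, 29.5, 0.033)
print(f"closing at {v:.1f} m/s ({v * 3.6:.0f} km/h)")   # ~15.2 m/s, ~55 km/h
```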

Then there is my old friend, monovision. I was born with this arrangement – one eye sees distance while the other focuses on the close-up – and this unique way of reading the roadbed and traffic sequence is now being studied for vehicle use. Each camera lens has a restricted focal point. ADAS camera developers and manufacturers like this method because it is less expensive than radar and/or stereoscopic cameras. A monovision camera concentrates on anything that casts a “shadow” and on the light levels around it. In a monochrome setting, the lens can distinguish different depths of darkness – like the shadow produced by a vehicle – and calculate velocity.
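
Without a second lens, a monocular camera has to lean on other cues. One common trick – my illustration here, not a description of any particular ADAS supplier's method – is to recognize what the object is, then use its typical real-world size and the pinhole-camera relationship to estimate range.

```python
# Minimal sketch: monocular ranging from a recognized object's typical size.
# Z = f * H / h -- H is the object's real-world height in meters, h is its
# height in pixels. The numbers below are illustrative assumptions.

def mono_range_m(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    """Estimate distance to a recognized object of known typical height."""
    return focal_px * real_height_m / pixel_height

# Example: a passenger car (~1.5 m tall) spanning 50 px, 1000 px focal length
print(f"{mono_range_m(1000, 1.5, 50):.0f} m")   # 30 m
```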

Ah! But what happens on a cloudy day or at night? Even as our eyes adapt to low-light situations, we are – unbeknownst to our conscious brain – looking for outlines and shadows to distinguish what is up ahead. These monovision cameras are quicker to calculate and therefore recognize an object with a quicker reaction time. But what happens in a worst-case scenario?

For example: You are driving at night in the Great Smoky Mountains National Park, cutting over from Gatlinburg to Townsend. You catch a glimpse of a black bear in the road about a half-mile ahead. Suddenly, the headlights dim then fade into the abyss. (Work with me here; I’m trying to make a point, knowing autonomous vehicles do not need headlights.) Suddenly you – and your monovision vehicle – plunge into darkness, with no celestial lighting to cast shadows thanks to the tree canopy. What do you do? Nothing.

Your vehicle is equipped with night vision. Debuted 22 years ago on select General Motors (GM) Cadillacs, the once-archaic thermal imager has evolved into a combination of thermal and visible imagery for night-vision cameras. These FLIR (forward-looking infrared) devices are on every AV manufacturer’s sightline, promising a low-to-no-light fix for accident-free travel – like getting around that black bear on the roadway. And we know how effective thermal imagers are in bay applications. Whether it’s a handheld device or an app on our smartphone, the imager picks up temperature variation – via its infrared signature – and presents it on the display. In a vehicle, the “display” is the coding through which the vehicle interprets the signal coming into the lens pack.
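
At its simplest, the thermal side of that night-vision fusion is just temperature thresholding: flag the pixels that read warmer than the roadway. The tiny synthetic frame and the 30 °C cutoff below are illustrative assumptions – production FLIR processing blends this with the visible image and learned classifiers.

```python
# Minimal sketch: flag warm pixels (say, that black bear) in a thermal frame
# by simple temperature thresholding. The 8x8 frame and 30 degC cutoff are
# illustrative assumptions.

ROAD_TEMP_C = 18.0
THRESHOLD_C = 30.0

def warm_pixels(frame_c, threshold_c=THRESHOLD_C):
    """Return (row, col) coordinates of pixels warmer than the threshold."""
    return [(r, c)
            for r, row in enumerate(frame_c)
            for c, temp in enumerate(row)
            if temp > threshold_c]

# A tiny synthetic frame: mostly cool roadway, one warm blob of body heat
frame = [[ROAD_TEMP_C] * 8 for _ in range(8)]
frame[4][3] = frame[4][4] = frame[5][3] = 36.5

print(warm_pixels(frame))   # [(4, 3), (4, 4), (5, 3)]
```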

Fiber optics support decision-making

We need something to support these optical advancements with a lightning-fast communication route. What could be faster than fiber optics? Automotive trendsetter Mercedes-Benz was the first to incorporate this type of dispatch in its S-Class vehicles, way back in the late 20th century. It was the “MOST” they could do in 1998.

MOST (Media Oriented Systems Transport) standardization began as a dependable – and most robust – vehicle communication platform. Built on plastic optical fiber (POF), the transmission medium carries more data than copper wiring, is lighter in overall weight, and offers high bandwidth with no susceptibility to electromagnetic interference (EMI) from components in or around the vehicle. About 99 percent of all makes and models rolling off the assembly line today have a touch or two of POF within their DNA.
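
A quick back-of-the-envelope calculation shows why even optical buses eventually run out of headroom once cameras enter the picture. The bus figures below are the nominal MOST rates; the camera parameters are illustrative assumptions, not a specific sensor.

```python
# Back-of-the-envelope sketch: raw camera video vs. the classic optical buses.
# Bus figures are the nominal MOST rates; camera parameters are illustrative.

MOST25_MBPS = 25      # original MOST over plastic optical fiber
MOST150_MBPS = 150    # later MOST150 generation

def raw_stream_mbps(width, height, fps, bits_per_px):
    """Uncompressed video data rate in megabits per second."""
    return width * height * fps * bits_per_px / 1e6

cam = raw_stream_mbps(1280, 960, 30, 8)   # one modest monochrome camera
print(f"camera: {cam:.0f} Mbit/s vs MOST25 {MOST25_MBPS} / MOST150 {MOST150_MBPS}")
# camera: 295 Mbit/s -- more than either bus, hence the push to automotive Ethernet
```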

FlexRay’s electrical and optical protocols were another advanced use of optoelectronics. Yet we need even faster speeds for both optical and electrical communication. Welcome automotive Ethernet – at this point a non-optical, copper-wire digital transport – with even higher data rates. And with fiber optics, information can travel at nearly the speed of light. Is there anything quicker to inform a vehicle-maneuvering decision? There is, and it’s something that blends in with the human element still behind the wheel.

The timing from what the camera sees, to processing, to reaction is crucial – as in the case of the self-driving Uber-versus-pedestrian fatality a few years ago. The vehicle’s systems noticed movement, then went back and forth between modules deciding what was actually in the road ahead. Only at the last moment – when it all came together – did sensor fusion identify the object as a pedestrian crossing the roadway. By then, it was too late. Granted, the base vehicle – a Volvo – is traditionally equipped with the latest in communication hardware, but it was not fast enough, triggering the need for human intervention well before the accident. The safety driver, I may add, was too busy watching a video on a mobile device, relying upon the vehicle to perform all functions. Should there be more machine learning before it’s okay to remove your hands from the steering wheel? More rules of the road for the “driver”? Critics say both.
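
To put that timing in perspective, here is the simple arithmetic of how far a vehicle travels while its perception stack is still "deciding." The speed and latency values are illustrative, not figures from the crash investigation.

```python
# Quick arithmetic: distance covered while the perception stack is still
# "deciding." The speed and latency values are illustrative assumptions.

MPH_TO_MPS = 0.44704   # 1 mph in meters per second

def distance_traveled_m(speed_mph, latency_s):
    """Meters traveled during the processing delay."""
    return speed_mph * MPH_TO_MPS * latency_s

for latency in (0.1, 0.5, 1.0, 2.0):
    print(f"{latency:4.1f} s at 40 mph -> {distance_traveled_m(40, latency):5.1f} m")
# 0.1 s -> 1.8 m, 0.5 s -> 8.9 m, 1.0 s -> 17.9 m, 2.0 s -> 35.8 m
```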

How fast is too fast? Like everything, there is an optimal point – a sweet spot – where timing comes together with function. Our present-day driving skill set relies upon human optics and human “machine learning,” also known as muscle memory. I mention machine learning in the best sense, as automakers are beginning to program us – the drivers – with this skill set to help us fold into the AV architecture, reporting the pros and cons of our latest drive to the grocery store. In case you did not realize it, Nissan was the first to provide operating-performance feedback to its drivers. The learning curve has begun.

And we cannot forget to smile while driving. Cameras are watching the road – and us. From the dash, a coded mono-camera is focused on our facial expressions and head movements. Are you blinking too many times? Perhaps it’s time to take a rest. “Let me recommend accommodations at the next exit.” And the instrument panel – under the speedometer – lists hotels for you to consider. This is happening now.
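
One published cue behind that blink counting is the eye aspect ratio (EAR): the ratio of eyelid opening to eye width collapses toward zero when the eye closes. The landmark coordinates and the values below are illustrative assumptions, not any automaker's implementation.

```python
# Minimal sketch of the eye aspect ratio (EAR) drowsiness cue: the ratio of
# eyelid opening to eye width collapses toward zero when the eye closes.
# The landmark coordinates below are illustrative assumptions; in practice
# the six points per eye come from a face-landmark model.
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """p1/p4 are the eye corners; p2, p3 are upper-lid and p6, p5 lower-lid points."""
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = eye_aspect_ratio((0, 0), (3, 2), (6, 2), (9, 0), (6, -2), (3, -2))
closed_eye = eye_aspect_ratio((0, 0), (3, 0.3), (6, 0.3), (9, 0), (6, -0.3), (3, -0.3))
print(f"open: {open_eye:.2f}  closed: {closed_eye:.2f}")   # ~0.44 vs ~0.07
```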

Level 3 autonomy – which some manufacturers want to skip over – is just around the corner. It will require more cameras – stereoscopic and monocular – along with other aids like lidar, proximity sensors and, I am sure, a resurgence of radar somewhere in the mix. With both human and vehicle intervention, we will still need to make those on-the-road decisions in the blink of an eye as we move toward vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication.

About the Author

Pam Oakes

Automotive SME Pam Oakes has been embedded within the automotive industry for 30 years as an automotive applications engineer, instructor/course developer (for several international companies), 609 instructor/test proctor, automotive business expert/strategist, 20-year original start-up shop owner/multiple auto business owner, ASE master automotive and medium/HD truck and collision technician-trainer with L4 ADAS, diesel Class 8 instructor, ASE testing contributing panel member (L4/A7), automotive author, syndicated radio host, and automotive-consumer news media commentator. And she still “turns wrenches” for fun.
