Naver describes its work on driverless cars


Naver Labs’ autonomous vehicle

Naver, the internet portal that recently jumped on the autonomous vehicle bandwagon, made public how far it has come with its auto-pilot technology at the Seoul Motor Show Thursday.

“Naver Labs is speeding up its research and development effort in autonomous driving through vision and deep-learning technologies,” said Song Chang-hyun, CTO of Naver and chief executive of Naver Labs, during a press conference at the motor show. “And the purpose of our research in autonomous driving is the improvement of future mobility and traffic systems as well as the ‘informatization’ of traffic conditions, necessary tools for ambient intelligence that Labs ultimately aims to create.”

Song explained that ambient intelligence understands a user’s needs and makes anticipatory moves to satisfy them, creating what he called a “natural user experience.”

“It is an interface that doesn’t require users to learn what the system can do,” he added.

Naver Labs is the first information technology company in Korea to land a license to operate autonomous vehicles for experimental and research purposes. It is also the first tech firm to officially join the Seoul Motor Show.

“We are continuously trying to get to SAE (Society of Automotive Engineers) autonomous level 4,” Song explained. At the moment, Naver Labs is at level 3, which means, based on guidelines from the U.S. Department of Transportation’s National Highway Traffic Safety Administration, that a driver can safely turn away from driving-related tasks but must be ready to take the wheel when necessary. At level 4, the vehicle can operate fully on its own on familiar roads. At level 5, it can perform as well as a human driver in every scenario.
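For readers keeping the scale straight, the following is a minimal Python sketch of the SAE levels as summarized above. The class and function names are illustrative only and are not part of any Naver Labs or SAE software.

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarized in the article."""
    NO_AUTOMATION = 0           # human driver does everything
    DRIVER_ASSISTANCE = 1       # a single assist feature helps the driver
    PARTIAL_AUTOMATION = 2      # combined assists; driver still monitors
    CONDITIONAL_AUTOMATION = 3  # car drives itself; driver must take over when asked
    HIGH_AUTOMATION = 4         # fully autonomous on familiar, mapped roads
    FULL_AUTOMATION = 5         # performs as well as a human driver in every scenario


def driver_must_stay_ready(level: SAELevel) -> bool:
    """At level 3 and below, a human must be ready to take the wheel."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION


print(driver_must_stay_ready(SAELevel.CONDITIONAL_AUTOMATION))  # True  (Naver Labs today)
print(driver_must_stay_ready(SAELevel.HIGH_AUTOMATION))         # False (the level 4 goal)
```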

Naver Labs’ auto-pilot system obtains raw data through various channels, including light detection and ranging (lidar), radar, cameras and GPS. The GPS recognizes the position of the vehicle through signals from satellites. The module’s radar detects any obstacles or vehicles ahead by sending electromagnetic waves forward, and it also senses their speed. A camera on the module is used to measure the distance between the vehicle and a vehicle or obstacle spotted by the radar while simultaneously recognizing traffic signals and the lane the vehicle is in. Finally, a lidar system, located at the very top of the module, scans for obstacles in all directions by shooting out laser beams. The laser beam’s travel time from the module to the object and back is used to calculate the distance between the car and the object.
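As a rough illustration of the lidar time-of-flight calculation described above, the Python sketch below converts a laser pulse’s round-trip travel time into a distance. The function name and example figures are assumptions for illustration, not details of Naver Labs’ system; real lidar units report ranges directly, but the underlying arithmetic is the same.

```python
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second


def lidar_distance_m(round_trip_time_s: float) -> float:
    """Distance to an object from a laser pulse's round-trip travel time.

    The pulse travels to the object and back, so the one-way distance
    is half of (speed of light x total travel time).
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2


# Example: a pulse that returns after 200 nanoseconds
print(round(lidar_distance_m(200e-9), 2))  # roughly 29.98 m
```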

“It’s a full spectrum recognition system utilizing deep learning-based object detection,” Song told reporters.

“All the technologies here are related to space,” he explained.

As for the deep learning technology that the company puts heavy emphasis on, Song explained that it increases the accuracy of spatial recognition tasks such as blind spot detection and slashes the time needed to develop the necessary algorithms.


Naver Labs’ Robot M1 [YONHAP]

Naver Labs also unveiled Robot M1, a fully automated indoor mapping robot capable of conducting scalable 3-D image processing. “M1 can be thrown into an indoor space or any other space in which GPS doesn’t work and create a 3-D map,” said Song. “It can digitize the indoor space and can be used as a key platform for many space-based services such as real estate.”


BY CHOI HYUNG-JO [choi.hyungjo@joongang.co.kr]