
SEOUL, April 08 (AJP) - Maxwell Zhou, CEO of Chinese autonomous driving company DeepRoute.ai, showcased the company's end-to-end AI-powered technology in a keynote speech at the Seoul Mobility Show, held at KINTEX northwest of Seoul, on Tuesday. He said the firm's Vision-Language-Action (VLA) model has ushered in a new era of scalability, transparency, and human-level decision-making in smart driving systems.
“This is not just a technological shift,” Zhou said during the show's forum session. “This is a revolution for robotics and mobility.”
Founded in 2019 and headquartered in Shenzhen, DeepRoute.ai has quickly emerged as a key player in China's smart mobility space, backed by more than $500 million in funding from investors including Alibaba Group and Great Wall Motor. Zhou noted that DeepRoute.ai has already deployed its system across more than 40,000 vehicles and is on track to integrate the technology into over 10 new vehicle models this year.
At the heart of DeepRoute's innovation is the VLA model, which Zhou described as a “generalized AI system” capable of understanding long-term driving contexts and delivering step-by-step, transparent decision-making. The system fuses visual inputs, textual prompts, and navigation data into one cohesive behavioral output, guiding vehicles not only with precision but also with an explanation of why each decision was made.
“In traditional systems, decisions are made silently,” Zhou said. “But with our VLA model, the car tells you why it slows down, why it yields to pedestrians, or why it chooses a particular lane. It’s like having a thinking co-pilot.”
This transparency, according to Zhou, is key to gaining customer trust, a challenge that has long plagued autonomous driving developers. In a follow-up interview, he emphasized that “our cars drive like humans, fast and safe, giving customers the confidence to trust and adopt the product.”
Zhou also downplayed the industry’s ongoing debate between LiDAR and camera-based perception systems, stating that hardware choices are secondary to the architecture of the AI model itself.
“The most important part is not whether you use LiDAR or cameras, but the intelligence of your AI system,” Zhou told AJP. “LiDAR might give you better perception in some rare cases, but for behavioral-level decisions, it brings no significant advantage. What matters is the quality of your architecture.”
Zhou highlighted that many competitors are still struggling to build their first end-to-end system, while DeepRoute.ai has already moved on to its second generation. “There’s at least a one-year gap between our technology and the rest,” he said. “The problem we’re solving is completely different.”
In addition to China, DeepRoute.ai has begun testing its vehicles in Germany and is preparing for wider global deployment. The company's ultimate goal, Zhou said, is to build artificial general intelligence (AGI) for robots: technology that can operate not only cars but any moving object.
“Smart driving should not be a premium feature,” he said. “It should be scalable, affordable, and available to all vehicle classes, from luxury to economy models.” Zhou ended his keynote with a vision for the future: a world where AI-native systems continually evolve through real-world data and deliver smarter, safer mobility with every mile. “We are not just building technology; we are building trust, intelligence, and productivity for the next generation of mobility,” he emphasized.
Copyright © Aju Press All rights reserved.