Zhipu AI brings GPT-4o-like model to China with “video call” integration
Zhipu AI is putting its foot on the gas in the race for multimodal artificial intelligence supremacy. On July 26, 2024, it launched Zhipu Qingying, a video generation model akin to Sora. While Sora remained inaccessible to the public months after OpenAI first previewed it, Qingying was made available to everyone for free from day one.
A month later, on August 29, Zhipu made a splash at the International Conference on Knowledge Discovery and Data Mining (KDD), debuting its own take on “Her”: a GPT-4o-like model featured in the consumer-facing product Zhipu Qingyan, which introduced a new “video call” function that brings AI one step closer to humanlike communication.
Qingyan stays updated on trends, too. After checking out the viral game Black Myth: Wukong, it quickly understood the content and could chat with users about it.
Alongside these updates, Zhipu rolled out a new multimodal model suite, featuring the visual model GLM-4V-Plus, which can understand both videos and web pages, and the text-to-image model CogView-3-Plus.
The base language model GLM has also been upgraded to GLM-4-Plus, a model capable of handling long texts and solving complex math problems with ease.
GPT-4o: Homework helper, tutor, and kitchen assistant

Previously, GPT-4o wowed users with its ability to perceive and express emotion. But Qingyan takes a more straightforward approach, reminding users that, as an AI, it cannot express emotions.
That said, Qingyan’s video call feature opens up practical applications tailored to China’s focus on lifelong learning.
For example, it can serve as a personal English tutor. With the camera on, users can learn on demand, anytime, anywhere. Qingyan also doubles as a math teacher—its explanations rival those of real-life tutors. Parents can finally take a breather from homework stress.
At home, Qingyan acts as a personal assistant, too. It can recognize a Luckin Coffee bag and provide a brief history of the brand. Though sometimes, it veers off course—like when it explained how to store the bag instead of the coffee inside.
Though video call histories can’t be saved yet, using Qingyan feels like having a tutor, homework helper, and kitchen assistant rolled into one.
New visual model: From video understanding to code interpretation

At KDD, Zhipu AI unveiled its updated model suite, including a new generation of its base language model and an enhanced multimodal family: GLM-4V-Plus and CogView-3-Plus.
What’s notable about GLM-4-Plus is that it was trained using high-quality synthetic data, showing that AI-generated data can be effective for model training while cutting training costs. According to Zhipu AI, GLM-4-Plus’s language understanding rivals that of GPT-4o and Llama 3.1 405B.
In terms of long-text capabilities, GLM-4-Plus performs on par with GPT-4o and Claude 3.5 Sonnet. On the InfiniteBench test suite, created by Liu Zhiyuan’s team at Tsinghua University, GLM-4-Plus even slightly outperformed these leading models.
Moreover, by adopting proximal policy optimization (PPO), a reinforcement learning algorithm suited to complex decision-making tasks, GLM-4-Plus has significantly improved its reasoning over data and code, and it aligns better with human preferences.
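Zhipu has not disclosed the details of its training recipe, but for reference, PPO’s standard clipped surrogate objective (from Schulman et al.’s original paper, not anything specific to GLM-4-Plus) maximizes:

```latex
L^{\mathrm{CLIP}}(\theta)
  = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
      \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```

Here, $\hat{A}_t$ is an advantage estimate, and the clipping range $\epsilon$ keeps each update close to the previous policy, which is what makes the method stable enough for preference alignment.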
The processing cost for 1 million tokens with GLM-4-Plus is RMB 50 (USD 7), comparable to Baidu’s latest large model, Ernie 4.0 Turbo, which costs RMB 30 (USD 4.2) for input and RMB 60 (USD 8.4) for output per million tokens.
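To make the price comparison concrete, here is a small illustrative calculation based on the figures quoted above; the flat-rate reading of GLM-4-Plus’s pricing and the sample workload are assumptions, not published benchmarks.

```python
# Illustrative cost comparison using the prices quoted above.
# Assumption: GLM-4-Plus's flat RMB 50 rate applies to both input and output tokens.

GLM_4_PLUS_RMB_PER_M = 50     # flat rate per million tokens (input or output)
ERNIE_INPUT_RMB_PER_M = 30    # Ernie 4.0 Turbo, per million input tokens
ERNIE_OUTPUT_RMB_PER_M = 60   # Ernie 4.0 Turbo, per million output tokens

def glm_cost_rmb(input_tokens: int, output_tokens: int) -> float:
    """Cost in RMB for GLM-4-Plus under a single flat per-token rate."""
    return (input_tokens + output_tokens) / 1_000_000 * GLM_4_PLUS_RMB_PER_M

def ernie_cost_rmb(input_tokens: int, output_tokens: int) -> float:
    """Cost in RMB for Ernie 4.0 Turbo, which prices input and output separately."""
    return (input_tokens / 1_000_000 * ERNIE_INPUT_RMB_PER_M
            + output_tokens / 1_000_000 * ERNIE_OUTPUT_RMB_PER_M)

# Sample workload: 1 million input tokens and 1 million output tokens.
print(glm_cost_rmb(1_000_000, 1_000_000))    # 100.0 RMB
print(ernie_cost_rmb(1_000_000, 1_000_000))  # 90.0 RMB
```

On this particular workload the two come out close, which matches the article’s “comparable” framing; the cheaper option depends on the input-to-output ratio.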
But what’s truly groundbreaking is its multimodal capability.
GLM-4V-Plus, the new visual model, now understands videos and web pages—significant improvements over its predecessor.
For instance, uploading a screenshot of Zhipu AI’s homepage allows GLM-4V-Plus to instantly convert it into HTML code, helping users quickly recreate a website.
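As a minimal sketch of how this screenshot-to-HTML use case might look against Zhipu’s open platform, the snippet below assumes the OpenAI-style `zhipuai` Python SDK; the model identifier, message format, and file name are assumptions rather than confirmed API details.

```python
# Hypothetical sketch: send a homepage screenshot to a GLM-4V-Plus-style
# endpoint and ask for HTML back. Model name and payload shape are assumed.
import base64
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="YOUR_API_KEY")  # placeholder credential

with open("homepage_screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="glm-4v-plus",  # assumed model identifier on the open platform
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_b64}},
            {"type": "text", "text": "Convert this page screenshot into HTML."},
        ],
    }],
)
print(response.choices[0].message.content)  # the generated HTML
```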
Unlike typical video comprehension models, GLM-4V-Plus not only understands complex videos but also has a sense of time. You can ask it about specific moments in a video, and it can identify the exact content. However, as of this writing, Zhipu AI’s open platform doesn’t yet support video uploads for this feature.
Despite its impressive visual capabilities, GLM-4V-Plus still lags behind GPT-4o in multi-turn dialogue and text understanding.
At KDD, Zhipu AI also introduced CogView-3-Plus, the next generation of its text-to-image model. Compared with FLUX, the field’s current frontrunner, CogView-3-Plus holds its own, generating images within 20 seconds.
CogView-3-Plus also supports image editing, such as changing object colors or replacing items in an image.
It took Zhipu AI over seven months to add the “Plus” suffix to models launched in January 2024—its longest development cycle since 2023.
What’s clear is that GPT-4o represents a pivotal moment for AI companies. Just as merging modalities were beginning to pry open the “black box” of language understanding, GPT-4o’s end-to-end approach sealed it shut again.
Most Chinese AI companies are adopting a divide-and-conquer strategy, first enhancing single-modal capabilities before tackling integration challenges. Zhipu AI is still in this phase, but the launch of its video call feature hints at the early stages of multimodal fusion.
KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Zhou Xinyu for 36Kr.