Cloopen AI Links DeepSeek, Launches Multi-Enterprise Business Verification
The six modules of Cloopen AI Cloud cover core business scenarios: marketing, sales, service, and enterprise management.
At the "AI Co-evolution 2025 Century Gravity X New Trend in the Technology Industry" conference, Kong Miao, the vice president of Cloopen AI and the founder of Zhuge Intelligent, as an industry representative, was invited to participate in the "AI Agent" roundtable forum. Together with leading enterprises, they explored breakthroughs in intelligent agent technology and commercialization paths, attracting great attention from the industry.
"Now we are starting to integrate the old and new software systems, leveraging computer-use or browser-use capabilities and letting a large model handle scheduling, thereby building a more advanced Agent system. This is also an ongoing upgrade in our understanding of Agents."
01 What exactly is an AI Agent?
We first encountered and experimented with the concept of the AI Agent in 2023, from two perspectives: on one hand, we watched how the industry defined Agents; on the other, we tried to understand and apply the concept ourselves.
Looking back to 2023, when ChatGPT was just emerging, the concept of the Agent began to attract attention. The earliest products, especially those of the so-called "Four Dragons" companies, had already introduced the concept of the "intelligent agent": they treated a specific role constructed by a System Prompt, such as the persona of a financial engineer, as an Agent. Then came a second stage, when open-source flow-style workflow orchestration began to emerge, and some people began to think that "orchestration is the Agent." By early 2024, a complete Agent was considered to possess several core capabilities: long-term memory, short-term memory, task planning, tool invocation, and result output. After Devin came out, automated programming was added to the list of Agent capabilities; this year, Manus added browser use.

So across the industry's development, the Agent went through different stages of being defined as an "agent" or "intelligent agent." Some people think an LLM persona in a specific scenario is an Agent; some think workflow orchestration is an Agent; and there are also developers at the forefront, such as Manus, who have truly enabled Agents to replace humans in using tools to complete tasks.

Cloopen AI started exploring Agents early, and our understanding has been updated continuously along the way. In our application practice serving large-model customers, we have distilled two main application scenarios. The first is the assistant-style application: in the workflow of a customer-service or sales agent, it provides prompt scripts, process guidance, or knowledge-retrieval support. We consider this closer to a Copilot, because it plays an auxiliary role within an existing process.
The second is the autonomous-execution application, such as script mining, script insight, and even knowledge-mining tasks. This is what we consider a true Agent: we define the goal for it, and through the LLM it can automatically understand historical communication and conversation records and call basic tools, such as function calls, code execution, or business APIs, to mine effective scripts and key information from the service, achieving end-to-end automatic processing.

This year, however, we also found that the simple end-to-end approach was not perfect. Measured against the industry's leading Agent standard (the LLM makes decisions, tool use handles invocation, and scenarios are designed around the business), we need to think at a higher level. In the past, our Agents mostly replaced old software with new software. Now we are beginning to integrate new and old software systems, leveraging computer-use or browser-use capabilities and letting the large model handle scheduling, thereby building a more advanced Agent system. This is also a continuous upgrade of our understanding of Agents.
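The autonomous-execution pattern described here (the LLM decides, tools execute, the loop repeats until the goal is met) can be sketched as follows. This is a minimal illustration under assumptions, not Cloopen's implementation: `decide` is a stub standing in for a real LLM planning call, and both tools are hypothetical.

```python
def search_records(query):
    """Illustrative tool: fetch historical conversation records."""
    return ["Agent: Our premium plan includes priority support."]

def extract_scripts(records):
    """Illustrative tool: distill effective scripts from raw records."""
    return [r.split(": ", 1)[1] for r in records]

TOOLS = {"search_records": search_records, "extract_scripts": extract_scripts}

def decide(goal, history):
    """Stand-in for an LLM planning step (a real system would call a model)."""
    if not history:
        return ("search_records", goal)
    if len(history) == 1:
        return ("extract_scripts", history[-1][1])
    return ("done", None)

def run_agent(goal):
    history = []
    while True:
        tool, arg = decide(goal, history)
        if tool == "done":
            return history[-1][1]        # result of the last tool call
        result = TOOLS[tool](arg)        # invoke the chosen tool
        history.append((tool, result))

print(run_agent("mine effective sales scripts"))
```

The loop is goal-driven rather than flow-driven: nothing in the code fixes the order of tool calls; the (stubbed) planner chooses each step from the goal and the history so far.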
"Our application scenarios inherently have certain vertical characteristics. When implemented in specific industries, we will also conduct targeted functional encapsulation and application adaptation based on the industry's unique features."
02 How should we choose between general and vertical?
Our work is relatively focused on the vertical side.
From the perspective of business scenarios: when we implement functions such as assistance, insight, and quality inspection, they have a certain universality within the customer-service field, but in our internal product processes, modules such as quality-inspection items, script mining, and script assistants are further specialized, what we call "vertical on top of vertical."
Overall, our application scenarios inherently have certain vertical attributes, and when implementing them in specific industries, we will also combine industry characteristics to carry out targeted function packaging and application adaptation.
"Our large-scale model and Agent capabilities are mainly applied to all aspects of the customer service scenarios, truly achieving an upgrade from efficiency improvement to value extraction."
03 From the perspective of the enterprise's actual business operations, what are the typical pain points of your business, and how does AI solve them?
Our current large model applications mainly focus on the entire process of sales and services.
When it comes to large models, many people first think of their disruption to intelligent customer service. After all, nowadays "customer service" is often referred to as "intelligent customer service". However, in fact, the application effect of traditional intelligent customer service in medium-sized and large enterprises is still relatively limited.
The earlier task-oriented systems built on small models performed well in specific scenarios, but communication is inherently long-tailed and diverse: different users may express the same problem in very different ways, and exhausting every expression with small models alone is almost impossible. This leads to two core pain points:
The first is high knowledge-operation cost: dedicated personnel must organize QA knowledge bases, a long-term and inefficient effort. The second is that model optimization relies on trainers, which affects the product's implementation and commercial feasibility.
As a result, intelligent customer service based on small models never truly produced a major industry player.
The emergence of large models has largely solved the language-generalization problem. As a foundation model, the LLM provides strong support for understanding diverse language; on this basis, lightweight small models complete specific tasks such as intent recognition. This is the core logic of our current intelligent customer service.
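The division of labor described here (foundation model for language generalization, lightweight model for the specific intent task) might look roughly like this sketch. Both models are stubbed out as assumptions: `foundation_normalize` stands in for an LLM that canonicalizes varied phrasing, and a keyword table stands in for the lightweight intent model; all names and rules are illustrative, not Cloopen's pipeline.

```python
def foundation_normalize(utterance):
    """Stand-in for an LLM call that rewrites varied phrasing canonically."""
    paraphrases = {
        "my package never showed up": "where is my order",
        "any update on the delivery?": "where is my order",
    }
    return paraphrases.get(utterance.lower(), utterance.lower())

# "Small model" stand-in: a trivial keyword-to-intent mapping.
INTENT_RULES = {
    "order": "order_status",
    "refund": "refund_request",
}

def classify_intent(utterance):
    canonical = foundation_normalize(utterance)   # generalization step
    for keyword, intent in INTENT_RULES.items():  # specific-task step
        if keyword in canonical:
            return intent
    return "fallback_to_human"                    # robot-first, human-backup

print(classify_intent("My package never showed up"))
```

The point of the structure is that the long-tail phrasing problem is absorbed before the narrow classifier runs, so the classifier can stay small and cheap.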
From the business perspective, the application scenarios of intelligent customer service are very extensive. Enterprises generally adopt a "robot first, human backup" model: online or voice inquiries are first handled by the robot and transferred to a human agent when the robot cannot resolve them. This is the typical intelligent customer service process.
Across the whole process, agents usually go through multiple stages: pre-session training and knowledge preparation; in-session real-time assistance, intelligent form filling, and business handling; post-session quality inspection and report monitoring. In these stages, many problems once limited by insufficient generalization and poor multi-turn handling have now been effectively improved by large models. For example:
During calls, the robot better understands interruptions and context shifts, and online customer service grasps user intent more accurately, no longer "answering off-topic." In quality inspection, the traditional reliance on regular expressions or small-model rule configuration was cumbersome and of limited effectiveness; now we only need to define "question items" in natural language, and the large model automatically detects abnormal behavior, significantly improving precision and recall.
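The natural-language "question item" pattern might be sketched as below. This is an assumption-laden illustration: `llm_judge` is a stub in place of a real model call, and the two items are invented examples, not Cloopen's inspection rules.

```python
# Question items written in natural language, not as regexes or rule configs.
QUESTION_ITEMS = [
    "Did the agent greet the customer?",
    "Did the agent use prohibited language?",
]

def llm_judge(question, transcript):
    """Stand-in for an LLM yes/no judgment over one transcript."""
    if "greet" in question:
        return transcript.lower().startswith(("hello", "hi"))
    if "prohibited" in question:
        return any(w in transcript.lower() for w in ("stupid", "shut up"))
    return False

def inspect(transcript):
    """Check every question item against the transcript."""
    return {q: llm_judge(q, transcript) for q in QUESTION_ITEMS}

report = inspect("Hello, thanks for calling. How can I help?")
print(report)
```

Adding a new inspection rule in this pattern means writing one more sentence, not one more regex, which is the cost reduction the text describes.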
The same is true for dialogue insight. In the past, customer communication records were massive and complex, manual listening to recordings was extremely inefficient, and traditional NLP methods struggled to mine them deeply. Now we simply hand this data to the large model or Agent and can extract potential needs, business-opportunity clues, and service breakpoints, and even reverse-drive the construction of intelligent processes. This shift from "observing indicators" to "reading content" is a major leap in dialogue insight.
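The "reading content" step might be organized as follows: run an extraction pass over each conversation, then aggregate the clues. As a hedged sketch, `extract_clues` is a keyword stub standing in for a real LLM extraction prompt, and the clue categories are invented for illustration.

```python
from collections import Counter

def extract_clues(conversation):
    """Stand-in for an LLM extraction prompt over one conversation."""
    clues = []
    if "price" in conversation.lower():
        clues.append("pricing_sensitivity")
    if "competitor" in conversation.lower():
        clues.append("competitive_risk")
    if "upgrade" in conversation.lower():
        clues.append("upsell_opportunity")
    return clues

def insight_report(conversations):
    """Aggregate per-conversation clues into a ranked summary."""
    counts = Counter()
    for conv in conversations:
        counts.update(extract_clues(conv))
    return counts.most_common()

convs = [
    "Can I upgrade my plan? What's the price difference?",
    "Your competitor offers a lower price.",
]
print(insight_report(convs))
```

The aggregation is what turns individual transcripts into the business-opportunity and service-breakpoint signals the text mentions; the per-conversation extraction alone would just be more records to read.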
The capabilities of our large models and Agents are mainly applied in all aspects of the customer service scenario, truly achieving an upgrade from efficiency improvement to value mining.
"From the perspective of tool application, there have been two stages: the first relied mainly on engineering methods, and the second adopts the now-popular MCP approach. Both are technical concepts that are advancing rapidly."
04 During the process of integrating Agents with business operations, what was the biggest challenge your enterprise faced, and how was it overcome?
From the perspective of implementing Agents at the enterprise level, the main difficulties are decision-making ability and tool-invocation capability.
Decision-making ultimately relies on model capability. In enterprise-level applications there is a great deal of internal domain knowledge, and enterprises' data engineering and knowledge engineering have not kept pace with the rapid development of the technology and its concepts. If that groundwork is inadequate, task planning will not be precise.
Tool application has gone through two stages. The first relied mainly on engineering methods, but using them well requires product and R&D teams to be deeply familiar with the business, which is a bottleneck. The second, the now-popular MCP approach, relies on combining ecosystem construction with existing software tools. Both dimensions are technical concepts moving very fast while enterprises are still mid-construction, so the impression is that everything can be done, yet the actual results are not that good.
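The contrast between the two stages can be sketched as follows. This is illustrative only, in the spirit of (not an implementation of) MCP: the tool, its schema, and the registry shape are all assumptions.

```python
def create_ticket(customer_id, summary):
    """Hypothetical business tool: open a support ticket."""
    return {"ticket": 1001, "customer": customer_id, "summary": summary}

# Stage one: an engineered, direct call, wired by developers who already
# know the business well enough to pick the right tool in code.
direct_result = create_ticket("C-42", "refund request")

# Stage two: tools registered with machine-readable descriptions, so an
# LLM planner (not shown) can discover and select them at runtime.
REGISTRY = {
    "create_ticket": {
        "fn": create_ticket,
        "description": "Open a support ticket for a customer",
        "params": {"customer_id": "str", "summary": "str"},
    }
}

def invoke(tool_name, **kwargs):
    """Look a tool up by name, as a planner would, and call it."""
    return REGISTRY[tool_name]["fn"](**kwargs)

registry_result = invoke("create_ticket", customer_id="C-42", summary="refund request")
print(registry_result)
```

The results are identical; the difference is who binds the call. In stage one the binding lives in developer knowledge, the bottleneck the text describes; in stage two it lives in the registry's descriptions, which is what makes an ecosystem of discoverable tools possible.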
"For the upstream models, we can adapt quite well. Some large-model manufacturers have technical capabilities but lack specific business implementation solutions, so we act as their downstream partners and combine our products with their models to jointly create industry-oriented solutions."
05 At present, many internet giants, including some major companies, are building their own AI ecosystems. Which approach do enterprises prefer: building an independent Agent closed loop or joining an ecosystem platform? What is the core strategic point in the game of ecosystem cooperation?
Currently, cloud providers and large-model manufacturers occupy the upstream position in the ecosystem. From our perspective as a partner, however, the situation differs by vendor.
We mainly perform fine-tuning and local deployment of large models. For open-source models like DeepSeek, or model services like Qianwen that provide excellent developer support and allow fine-tuning, we can adapt well.
At the same time, although some large model manufacturers have technical capabilities, they lack specific business implementation plans. Therefore, we will act as their downstream partners and combine our products with their models to jointly create industry-oriented solutions. This is our open attitude towards the ecosystem and the way we implement it.
In terms of Agent tool invocation, the enterprise-level ecosystem is still in its early stage and is not yet fully mature. In the future, we will actively integrate into this ecosystem to promote the development and collaboration of tool invocation capabilities.