AgentOS Empowers
Smarter Robot Experiences

Make Robots Smarter

Human-like thinking, interaction, and response

Make Secondary Development Faster

From 2 weeks to just 2 days

Make Robots Smarter

Human-like thinking, interaction, and response

Smarter Thinking, Smoother Speaking

Previous Generation Robots

Requires a wake-up word.
Dialogue is rigid and easily interrupted.

Voice Robots with AgentOS

No wake-up needed.
Conversations flow naturally,
without interruption.

Multilingual, Seamless Communication

Previous Generation Robots

No language switching

Voice Robots with AgentOS

Switch freely between 10 languages.

Smarter with AI

Previous Generation Robots

Traditional NLP that is difficult and complex to configure.

Voice Robots with AgentOS

Accurate intent recognition
Faster, smarter responses

Easy to Deploy, Smarter to Understand

Previous Generation Robots

Relies on manual Q&A setup.
Configuration-heavy and inflexible.

Voice Robots with AgentOS

One-click enterprise knowledge import
Easy setup, fast deployment

Make Secondary Development Faster

From 2 weeks to just 2 days

Advanced Scenario Development Capabilities

370+ APIs
20+ voice interaction enhancement items

Built-in LLMs Capability Interface

10 lines of code to call instantly
Zero integration cost

Stay Ahead in the AI Programming Era

Compatible with mainstream AI coding tools such as Cursor, significantly lowering the development barrier.

Android App Migration in Seconds

No rework needed; Android engineers can get started instantly.

AgentOS

Designed for Diverse Scenarios

Guiding · Reception · Companionship · Elderly Care

Robot Receptionist at Beijing Art Center

Beijing Art Center is one of the three major cultural landmarks in Beijing's sub-center, hosting 280+ performances for 190,000 audience members annually. Its 125,000-square-meter space has complex circulation routes where audiences easily get lost, so more considerate on-site guidance was needed. Staff handled thousands of repetitive inquiries a day under high-intensity conditions for long periods, so more intelligent consultation assistance was needed. Before each performance, multiple staff had to be assigned to circulation guidance and curtain calls, wasting manpower, so a more efficient curtain-call method was needed.

OrionStar intelligent reception robots relieve consultation pressure at the service desks, respond to audience requests promptly, and make curtain calls warm. Since the robots went on duty, audience satisfaction has risen to 98%.

11 hours
Daily working hours
2,600+ times
Daily voice interactions
27 times
Daily guidance and directions

Client Testimonial

The reception robot now handles guest reminders and send-offs. Our team can finally focus on deeper, more meaningful services like art tours.

Beijing Art Center case video

FAQ

Which robots in the OrionStar robot family can be upgraded to the AgentOS system?

Currently, the OrionStar GreetingBot Nova and OrionStar GreetingBot Mini in the voice interaction robot series can be upgraded. If ecosystem partners have models they would like upgraded, please let OrionStar know in the comments section.

How is multi-language recognition achieved, with responses in the corresponding language? Which voice packages and technologies are used? Can it recognize Cantonese, Hakka, and Sichuan dialect, and respond in them?

OrionStar AgentOS integrates multi-language speech recognition, supporting over 10 mainstream languages including Chinese, English, Japanese, Korean, Thai, Spanish, French, Italian, and German. Among dialects, Cantonese recognition and response are currently supported; Hakka and Sichuan dialect are not yet supported and will be added gradually.

In complex, noisy environments, how does the robot accurately recognize and capture the primary speaker's voice and commands without being disturbed by surrounding chatter? This point is critical; can you answer in detail?

OrionStar AgentOS makes the decision by combining multimodal visual recognition, sound-source distance and angle estimation, and voiceprint recognition algorithms with front-end noise reduction optimized for OrionStar robot microphone arrays.
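To make the cue-fusion idea concrete, here is a minimal sketch of scoring candidate speakers from the signals mentioned above. The weights, threshold, and data structure are illustrative assumptions, not the actual AgentOS algorithm.

```python
# Illustrative sketch only: fusing multimodal cues to pick the primary speaker.
# Weights and thresholds are assumptions, not the real AgentOS algorithm.
from dataclasses import dataclass

@dataclass
class SpeakerCandidate:
    face_detected: bool      # multimodal visual recognition
    distance_m: float        # estimated sound-source distance
    angle_deg: float         # angle from the robot's facing direction
    voiceprint_score: float  # 0..1 similarity to the engaged user's voiceprint

def primary_speaker_score(c: SpeakerCandidate) -> float:
    """Weighted fusion of the cues described above (illustrative weights)."""
    score = 0.35 if c.face_detected else 0.0
    score += 0.25 * max(0.0, 1.0 - c.distance_m / 3.0)      # nearer is better
    score += 0.15 * max(0.0, 1.0 - abs(c.angle_deg) / 90.0)  # frontal is better
    score += 0.25 * c.voiceprint_score
    return score

def pick_primary(candidates):
    """Return the best-scoring candidate, or None if all look like chatter."""
    best = max(candidates, key=primary_speaker_score, default=None)
    if best is None or primary_speaker_score(best) < 0.5:
        return None  # treat as irrelevant background speech
    return best
```

A bystander far away and off-axis, with no detected face and a low voiceprint match, scores well below the threshold and is discarded, which mirrors the filtering behavior the answer describes.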

How do I integrate third-party platforms such as knowledge bases, ASR models, and large models?

OrionStar AgentOS has built-in underlying large-model capabilities. Because these are deeply integrated with the Agent Brain's (main Agent's) thinking process, they cannot simply be swapped out. Within Agent applications, however, developers can flexibly integrate third-party capabilities to replace system capabilities.

When a third-party device (such as a height and weight measuring instrument) announces height and weight aloud, why can't the robot recognize it?

This is due to the wake-up-free mechanism's restrictions (non-human audio is filtered out). If you have special requirements, you can manually turn off the wake-up-free capability through the Agent secondary-development API.

How does OrionStar AgentOS reduce secondary development from 2 weeks to 2 days?

1) It applies an AI Agent development paradigm that simplifies many development processes for voice interaction scenarios: complex voice-backend configuration is no longer needed, and basic capability modules are built into AgentOS, so developers can focus on their own business logic.
2) The AgentOS SDK is deeply adapted to AI programming tools, providing AI-powered assistance.

Original OPK/APP development required Android expertise and an Android development environment. Is Agent development simpler now?

Because of the robot system's underlying dependencies, basic Android knowledge is still needed, but with AI programming tools and the AgentOS SDK's optimizations for them, the process keeps getting simpler. For example, in our internal hackathons, colleagues with no coding background completed a new Agent voice interaction application on the AgentOS SDK within a few days with AI assistance.

For enterprise front-desk scenarios with robot door-access requirements, is the door-access dialogue global, or must the user enter a specific application function to execute it?

It can be defined either as a global intent or as an intent within a specific business scenario.
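The two scopes above can be pictured as a small registry where scenario-scoped intents shadow global ones. `IntentRegistry` and its methods are hypothetical names for illustration; they are not the real AgentOS SDK API.

```python
# Hypothetical sketch of global vs. scenario-scoped intents.
# Class and method names are illustrative, not the real AgentOS SDK.

class IntentRegistry:
    def __init__(self):
        self._global = {}   # intent name -> handler, active everywhere
        self._scoped = {}   # scenario name -> {intent name: handler}

    def register_global(self, name, handler):
        """Intent available in any dialogue state (e.g. door access)."""
        self._global[name] = handler

    def register_scoped(self, scenario, name, handler):
        """Intent available only inside a specific business scenario."""
        self._scoped.setdefault(scenario, {})[name] = handler

    def resolve(self, name, scenario=None):
        """Scoped intents take precedence when their scenario is active."""
        if scenario and name in self._scoped.get(scenario, {}):
            return self._scoped[scenario][name]
        return self._global.get(name)
```

A door-access intent registered globally fires from any dialogue; registering it under a "reception" scenario instead confines it to that flow.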

While answering questions, must the door-access function target fixed scenarios, or can users reply freely?

This can be handled through natural-language constraints, for example by emphasizing the requirement in the Action function and parameter descriptions. If specified clearly enough, then when someone gives an ID card number, OrionStar AgentOS will judge semantically that it is not a phone number and ask the user for a phone number instead.
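A sketch of what such an Action definition might look like, with the constraint spelled out in the parameter description. The structure is a hypothetical illustration, not the real AgentOS SDK schema; in AgentOS the semantic judgment is made by the model, so the regex here is only a stand-in (Chinese mobile numbers are 11 digits starting with 1, while ID card numbers are 18 characters).

```python
# Hypothetical Action definition; the schema is illustrative, not the
# real AgentOS SDK. The description carries the natural-language constraint.
import re

open_door_action = {
    "name": "open_door",
    "parameters": {
        "phone_number": {
            "type": "string",
            # Emphasize the constraint so the model rejects look-alikes:
            "description": ("The visitor's 11-digit mobile phone number. "
                            "An 18-character ID card number is NOT acceptable; "
                            "if the user gives an ID card number, ask again."),
        }
    },
}

def looks_like_phone_number(value: str) -> bool:
    """Stand-in format check for the model's semantic judgment."""
    return re.fullmatch(r"1\d{10}", value) is not None
```

In practice the model applies the description semantically, so even inputs that a regex cannot distinguish (say, a number read out in words) get handled; the check above only illustrates the intent of the constraint.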

If the Agent's thinking produces a wrong answer, can custom orchestration processes be defined?

The following methods can intervene in the Agent's thinking process:
1) Define and orchestrate through natural language in application development; keep refining your prompt to guide AgentOS on what logic and steps to follow.
2) Provide examples in the application for the Agent to reference.
3) Use global-level Action strong-intervention strategies, for example pinning a specific utterance to a deterministic action.

Can it make phone calls, for example to call duty personnel's mobile phones?

This requirement can be implemented in a secondary-development Agent application.

How can robots record video during patrol and transmit it to a server? Any good solutions?

Agent applications implement this themselves; it relies on basic Android capabilities.

What are the main differences and advantages of OrionStar AgentOS compared to platforms like Coze and Dify?

1) OrionStar AgentOS can be deeply integrated with the robot's business application interaction and interface logic, which Coze and Dify cannot achieve.
2) AgentOS is the runtime environment for OrionStar service robot Agent applications, supporting Agent operation and scheduling, while Coze and Dify are workflow building platforms.