
As AI rapidly penetrates high-responsibility industries such as healthcare, government, and law, we stand at a critical technological juncture: AI is no longer merely an assistive tool but is gradually being integrated into these systems, participating in judgments, generating recommendations, and perhaps even influencing final decisions. Models such as DeepSeek and ChatGPT are being embedded into all kinds of systems, and this has become the norm.
Tsinghua University launched the world’s first AI hospital, Agent Hospital; Shenzhen’s Futian District deployed AI “civil servants”; and the legal industry has widely adopted AI to draft instruments and retrieve regulations. While efficiency has greatly improved, an increasingly urgent question has emerged:
AI can participate in judgment, but when the judgment is wrong, who is responsible?
Tsinghua AI Hospital: not a tool, but an “AI doctor”?
The “Agent Hospital” launched by Tsinghua University has completed more than 10,000 simulated consultations between AI doctors and AI patients, achieving a diagnostic accuracy rate of 93.06%. Unlike traditional hospitals that merely use AI for assistance, Agent Hospital embeds intelligent agents deeply at the architectural level.
Tsinghua has said that the AI is not a replacement for doctors but a collaborative tool to relieve the strain on primary healthcare resources and improve efficiency and quality. What cannot be ignored, however, is this: once AI doctors participate in diagnoses that the healthcare system adopts, how should the legal chain of responsibility behind them be apportioned?
“AI civil servants”: who will endorse the official documents?
The “AI digital-intelligence employees” piloted in Shenzhen’s Futian District have been widely discussed as “AI civil servants” for their standardized work drafting official documents, analysing data, and answering enquiries. In reality, however, they hold no administrative power, and all of their work is supervised and reviewed by designated civil servants. Every parameter, task, and output of these AI employees is monitored, and civil servants may intervene and make changes at any time, ensuring that AI participation remains “controlled”.
An AI law firm? It can draft paperwork, but it can’t give “judgment.”
AI is widely used in the legal industry to assist in drafting contracts, analysing cases, and generating legal documents. But as the Beijing Internet Court’s “virtual judge” shows, AI can participate in process management, yet it can never make the final decision.
John Roberts, the 17th Chief Justice of the United States, emphasized in 2023 that AI “clearly has enormous potential to dramatically improve access to critical information for lawyers and non-lawyers alike, but any use of AI will require caution and humility”, citing the risk of violating privacy rights and “dehumanizing the law”.
AI is no longer an outsourced tool; it is becoming “part of the system” in industries such as healthcare, government, and law. Large language models are being built into business processes as platforms, from data comprehension and document generation to intelligent recommendation, with ever-deeper impact.
As a result, cross-industry, systemic ethical issues have emerged, not as side effects of technological progress but as inevitable challenges at the intersection of technology and institutions:
01. The “vanishing zone” of the chain of responsibility
As AI outputs are widely used in medical advice, policy recommendations, and legal documents, the AI's role is drawing ever closer to that of a “judge”. Yet when these outputs are biased or wrong, the public often has no way to assign accountability.
The AI itself has no legal personality, and it is often impractical for a human operator to review each specific action. The chain of responsibility gradually blurs within “system automation”, forming a grey area that the law struggles to reach. This ambiguity causes human responsibility, which ought to be clear, to recede behind a technological shell.
02. The marginalization of professional judgment
AI was introduced to improve efficiency, but in practice its high accuracy and fast responses often make human professionals dependent on it. Work that once required subjective judgment and professional experience has gradually devolved into “confirming whether the AI is correct” rather than “making an independent judgment”. Although systems still retain a “human signature” as a final safeguard, substantive thinking and judgment may already have given way to tacit acceptance. Over time, this trend will progressively diminish the human role in critical decision-making and undermine the foundations of professional accountability.
03. Lack of public right to know and choose
The use of AI is becoming ever more covert, embedded in all kinds of processes: you may not realize that the initial medical advice you receive, the response from a government office, or the contract draft you are handed has been partially or fully generated by AI. In such cases, users often cannot tell whether they are interacting with an AI, and have no opportunity to choose whether to accept that mode of service.
When deployers deliberately downplay the presence of AI for reasons of efficiency or cost, the rights to know and to choose are sacrificed. This raises not only questions of procedural fairness but also challenges the public’s basic expectation of institutional transparency.
From healthcare to government to law, AI is rapidly penetrating all kinds of decision-making processes, bringing efficiency while reshaping the mechanisms of judgment. As AI becomes “part of the system”, legal and compliance questions follow: how to identify the responsible parties, how to prevent algorithmic discrimination and data abuse, and how to ensure that human judgment is not marginalized in high-risk scenarios.
The complexity of these issues lies not in any violation of the law, but in the fact that institutional preparation lags behind technological evolution. At such an uncertain stage, it is all the more necessary to maintain a clear sense of where responsibility lies. Only when those boundaries are drawn clearly enough can AI become a reliable assistant rather than a potential amplifier of risk.
Written by Xueying Yang; content planning: Zhou Yan and Xueying Yang; proofreading: Sun Gang
The content of this article is based on publicly available information and the author’s understanding, and does not constitute any form of professional legal advice or basis for business decisions. Readers should refer to this article in the context of their own actual situation and consult relevant professionals for specific guidance. The author and the publishing platform do not assume legal responsibility for any consequences arising from the use of the information in this article.