Mumbai, February 22: Deepseek has been praised for its sound engineering and low development cost. The Chinese AI model that took the world by storm reportedly cost only around USD 6 million to build. Despite the praise earned by the Deepseek R1 and Deepseek V3 models, some countries are considering banning the Chinese artificial intelligence service over privacy and national security concerns.
China has often been criticised for misusing information collected through its platforms and using it against rival countries, and Deepseek is now caught in the middle of these ongoing tensions. Recently, while hearing a plea to ban Deepseek in India, the Delhi High Court observed that AI (artificial intelligence) could be a "dangerous tool", whether in the hands of China or the USA.
According to a post by AppSOC, a Silicon Valley provider of AI governance and application security, the Deepseek R1 model is a "Pandora's box of security risks". The company said it scanned the R1 model in depth using its AI Security Platform and found significant risks that could not be ignored.
Deepseek R1 Security Risks and Failures During AppSOC Testing
The AppSOC testing, which combined automated static analysis, dynamic tests and red-teaming techniques, revealed that the Chinese AI model posed serious risks. The results showed that Deepseek R1 recorded a 91% jailbreaking failure rate, meaning its safety mechanisms could frequently be bypassed to produce harmful or restricted content. The model also had an 86% failure rate against prompt injection attacks, which can lead to incorrect outputs, policy violations and system compromise.
The Deepseek R1 model also recorded a 93% failure rate in malware tests, making it susceptible to being used to generate malicious code. It scored a 72% failure rate on supply chain risks and a 68% failure rate for toxicity (harmful language). Most AI chatbots also suffer from "hallucinations", producing factually incorrect or fabricated information; in this area, Deepseek R1 recorded an 81% failure rate during the test.
Based on the testing, AppSOC assigned risk scores out of 10. The Chinese AI chatbot received a Security Risk score of 9.8, a Compliance Risk score of 9, an Operational Risk score of 6.7 and an Adoption Risk score of 3.4. Across almost all categories, Deepseek R1 was deemed a dangerous AI tool with major security risks.
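AppSOC has not published the formula behind its 0-to-10 risk scores. As an illustration only, a composite score could be derived from per-category failure rates with severity weights; the weights below are hypothetical assumptions, not AppSOC's methodology, while the failure rates are the ones reported above.

```python
# Hypothetical sketch: AppSOC's actual scoring formula is not public.
# Failure rates reported for Deepseek R1 (fraction of tests failed).
failure_rates = {
    "jailbreaking": 0.91,
    "prompt_injection": 0.86,
    "malware": 0.93,
    "supply_chain": 0.72,
    "toxicity": 0.68,
    "hallucination": 0.81,
}

# Hypothetical severity weights (assumptions for illustration only).
weights = {
    "jailbreaking": 2.0,
    "prompt_injection": 2.0,
    "malware": 2.0,
    "supply_chain": 1.0,
    "toxicity": 1.0,
    "hallucination": 1.0,
}

def risk_score(rates, weights):
    """Weighted average of failure rates, scaled to a 0-10 score."""
    total_weight = sum(weights.values())
    weighted_sum = sum(rates[k] * weights[k] for k in rates)
    return round(10 * weighted_sum / total_weight, 1)

print(risk_score(failure_rates, weights))  # → 8.5
```

With these assumed weights the composite lands at 8.5, in the same high-risk band as AppSOC's reported 9.8 Security Risk score; the point of the sketch is only that uniformly high per-category failure rates inevitably produce a high overall score.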
(The above story first appeared on LatestLY on Feb 22, 2025 07:10 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).