AI's Dark Side and the Pending Legal Storm: Unmasking the Hidden Flaws of Artificial Intelligence

With the release of DeepSeek, I thought it important to address the risks of leveraging the model, or indeed any model, beyond the traditional FUD applied to anything that emanates from China. I delve into the operational considerations of leveraging any unknown or untested model, whether an LLM or an SLM, to achieve your state of Gen AI bliss.

Why is this important now? Last Sunday, 2 February 2025 to be precise, the first rules of the European Union's Artificial Intelligence Act began to apply to any entity that operates within the 27-nation economic zone. Canada's Artificial Intelligence and Data Act is also looming but is struggling to gain passage before the October 2025 election, which means it will automatically fail and have to be reintroduced under the new government. It's also disappointing that there are no similar protections afforded to anyone, person or company, here in the US following the rescission of Presidential Executive Order 14110 on January 20, 2025.

Why is it important to you? In this example, the intersection of the EU's AI Act and leveraging DeepSeek's model in any way (other than downloading the open-source version, modifying it, and keeping the derivative local) most likely means your company will be running afoul of the Act's short list of prohibited AI practices.

How so? The EU's AI Act, in Chapter 2, Article 5, prohibits "the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement;"

OK, but … how? Chicago-based NowSecure has done a deep examination of how DeepSeek, in tandem with its iOS app, functions. The key findings are that the app communicates with ByteDance's Volcengine, where, at a minimum, data privacy and ownership are lost because the iOS app sends information in the clear to an intermediary rather than directly to DeepSeek. According to NowSecure, the iOS app collects and transmits device information unencrypted, in enough detail that it amounts to device fingerprinting. NowSecure adds that if this information is coupled with other generally available data, such as that gathered by mobile advertising companies, DeepSeek iOS app users could be deanonymized. Although this next point remains to be confirmed, it is reasonable to assume that leveraging the hosted model through any of the available channels (web, API, or app) involves ByteDance's Volcengine, given the architecture the iOS app appears to use.
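To make the fingerprinting point concrete, here is a minimal sketch in Python. The field names are hypothetical, not the actual payload NowSecure observed; the point is that individually bland device attributes, once combined and visible to an intermediary, behave like a stable per-device identifier.

```python
# Illustrative sketch only: hypothetical device attributes, not the payload
# NowSecure documented. Each field looks harmless on its own, but the
# combination is stable across sessions for a given device.
import hashlib
import json

device_attributes = {
    "model": "iPhone15,3",
    "os_version": "17.6.1",
    "locale": "en_US",
    "timezone": "America/Chicago",
    "screen": "1290x2796",
}

# The same attributes hash to the same value every run, so any party that can
# read this payload in transit can correlate sessions back to one device.
fingerprint = hashlib.sha256(
    json.dumps(device_attributes, sort_keys=True).encode()
).hexdigest()

print(fingerprint)
```

Pair that stable identifier with advertising datasets that already map device attributes to identities, and deanonymization stops being theoretical.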

Bottom line? Models matter beyond being ethically trained, and in this universe there are nearly 7,000 "unsafe" and more than 6,000 "suspicious" AI models. Not to worry, though: there are over 1,229,820 "safe" models at the time this article is being published. But which AI models is your organization using or testing?
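A practical first step toward answering that question is an inventory of what is already in your environment. The sketch below assumes models arrive via the Hugging Face Hub client, which the article does not specify; models pulled through vendor APIs, container images, or vendored weights need their own accounting.

```python
# Minimal inventory sketch: list the Hugging Face models cached locally,
# largest first. Only covers models fetched through huggingface_hub.
from huggingface_hub import scan_cache_dir

cache = scan_cache_dir()
for repo in sorted(cache.repos, key=lambda r: r.size_on_disk, reverse=True):
    if repo.repo_type == "model":
        print(f"{repo.repo_id:60s} {repo.size_on_disk / 1e9:6.2f} GB")
```

Run on developer laptops and build servers, this gives a concrete list to check against whatever "safe" or "unsafe" designation your governance process relies on.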

Is your company prepared for the security specifics of AI and Gen AI? Do you understand the four categories of risk the AI Act defines (unacceptable, high, limited, and minimal), the transparency risk of using AI, and your associated disclosure obligations? Does your organization have auditable processes, systems, and other tools that will support full compliance and make proof of that compliance easy to provide to authorities across the globe?

I'm sorry to say that traditional systems and tools won't cut it. There's no CVE for an authenticated user sending valid information through an authorized system.

I'm here to help, as always. I've been living AI for over 8 years, and the learning never ends. I've been spending a good amount of time on this, especially given the large gaps in traditional tools and the manual processes those gaps force on organizations that don't adapt.

There are alternatives. DM me and we can get a conversation started.
