

[Hongke Solutions] EU AI Act: How AI Literacy Training for Enterprises can be Realized

With the EU AI Act now in force and applying in phases, AI governance is moving from voluntary self-regulation to a regulatory imperative. The Act follows a staged timeline: it entered into force on August 1, 2024, and becomes fully applicable on August 2, 2026; the prohibitions on certain AI practices and the AI literacy obligations have applied since February 2, 2025. Companies must therefore already be able to account for the adequacy of their employees' AI literacy and the measures they have taken.

The Act's core requirement: AI literacy is something you must do, and keep doing

Article 4 of the EU AI Act explicitly requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf, taking into account their technical knowledge, experience, education and training, the context in which the AI systems are to be used, and the persons on whom the systems will be used.
 
The point is not simply to "run a course", but to be able to answer in management terms: Do you segment training by role? Do you have a defined cadence and completion tracking? Is the training content aligned with how AI is actually used? And can you justify all of this when an auditor or authority asks?

Turning AI Knowledge into Compliance: A Four-Level Training Framework

To make AI literacy training effective, we recommend a four-layer structure that moves from shallow to deep and from knowledge to behavior, in line with Article 4's idea of tailoring training to the person, the context, and the people affected.
 
  • Basic Cognition Layer: Establish core AI concepts, capability boundaries, and common misconceptions, reducing the operational and compliance risks caused by over-trusting or misusing tools.
  • Risk Identification Layer: Enable employees to view AI usage scenarios through a compliance-risk lens and recognize which scenarios are especially sensitive and require stronger controls.
  • Compliance Operations Layer: Turn company policy into actionable procedures (e.g., which tools may be used, which data must not be entered, which outputs require human review, and when to go through approval), so that policies are actually followed in practice.
  • Responsibility Culture Layer: Make transparency, fairness, accountability, and human oversight into working habits, especially in AI-assisted decision-making contexts.

Role differentiation: different departments need to learn different things from the same statute

The essence of Article 4 is tailoring the curriculum to the audience. We therefore recommend differentiating the syllabus and its depth by role, rather than giving the whole company the same set of materials, which leads to "learning without applying".
 
  • All employees: Focus on safe, compliant day-to-day use of AI tools (e.g., handling sensitive data, reviewing outputs, awareness of errors and bias).
  • Technical/data teams: Deepen how compliance requirements become verifiable control points across development, deployment, and maintenance (bias risk, record retention, transparency, and governance interfaces).
  • HR/legal: For highly sensitive decision-making scenarios such as recruiting and performance evaluation, sharpen sensitivity to the regulations and internal control processes wherever people are affected by AI.
  • Management: Focus on governance: how to set up the training system, assign responsibilities, maintain audit evidence, and run cross-departmental collaboration, so the company is demonstrably "taking measures".
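The role-by-layer differentiation above can be sketched as a simple curriculum matrix. Everything here is illustrative: the role names, depth levels, and module names are hypothetical examples, not terms from the Act or from any training platform.

```python
# Hypothetical sketch: mapping roles to the four training layers at different
# depths (1 = awareness, 3 = in-depth). All names and numbers are illustrative.
from dataclasses import dataclass, field

LAYERS = ["basic_cognition", "risk_identification",
          "compliance_operations", "responsibility_culture"]

@dataclass
class Curriculum:
    role: str
    depth: dict = field(default_factory=dict)  # layer -> depth level 1-3

CURRICULA = {
    "all_staff": Curriculum("all_staff",
        {"basic_cognition": 2, "risk_identification": 1,
         "compliance_operations": 1, "responsibility_culture": 1}),
    "tech_data": Curriculum("tech_data",
        {"basic_cognition": 2, "risk_identification": 3,
         "compliance_operations": 3, "responsibility_culture": 2}),
    "hr_legal": Curriculum("hr_legal",
        {"basic_cognition": 2, "risk_identification": 3,
         "compliance_operations": 2, "responsibility_culture": 3}),
    "management": Curriculum("management",
        {"basic_cognition": 1, "risk_identification": 2,
         "compliance_operations": 2, "responsibility_culture": 3}),
}

def layers_for(role: str, min_depth: int = 2) -> list[str]:
    """Layers a given role must cover at or above a given depth."""
    cur = CURRICULA[role]
    return [layer for layer in LAYERS if cur.depth.get(layer, 0) >= min_depth]
```

Keeping the matrix explicit like this makes it easy to show an auditor why each group received the training it did, and to regenerate assignments when roles or depths change.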

Use KnowBe4 for "Manageable, Trackable, Auditable" Training Operations

If an organization already runs security-awareness or compliance training, folding AI literacy into the existing platform is usually the lowest-effort way to build a continuous chain of evidence.
 
KnowBe4's Compliance Plus focuses on delivering compliance courses through the KnowBe4 training platform: short interactive modules, automated training campaigns, and reporting and tracking, with an emphasis on regular content updates, so that training becomes an ongoing operation rather than a one-off project.
 
In addition, according to publicly available information, Compliance Plus offers a library of over 500 modules and emphasizes customizability, continuous updates, and delivery through the KnowBe4 platform; this structure fits Article 4's call for training driven by context and by the people involved.
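The "trackable, auditable" idea above can be sketched as a minimal data model for completion evidence, assuming you can export completion records from whatever training platform you use. The field names, the one-year refresh window, and the `audit_gaps` helper are all assumptions for illustration, not any platform's actual API.

```python
# Hypothetical sketch of an auditable training-completion record and a gap
# check. Field names and the refresh window are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingRecord:
    employee_id: str
    role: str
    module: str
    completed_on: date

def audit_gaps(records, required_modules, as_of, max_age_days=365):
    """Return (employee_id, module) pairs lacking a recent-enough completion.

    required_modules: dict mapping employee_id -> list of required modules.
    """
    recent = {
        (r.employee_id, r.module)
        for r in records
        if (as_of - r.completed_on).days <= max_age_days
    }
    gaps = []
    for emp, mods in required_modules.items():
        for mod in mods:
            if (emp, mod) not in recent:
                gaps.append((emp, mod))
    return gaps
```

A periodic report built from a check like this is exactly the kind of artifact that lets you answer "what measures have you taken, and for whom?" with dated evidence rather than assertions.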
