
[Hongke Solutions] EU AI Act: How Enterprises Can Deliver AI Literacy Training

With the EU AI Act now in force and entering phased application, AI governance is shifting from a voluntary best practice to a regulatory obligation. The Act follows a staggered timeline: it entered into force on August 1, 2024 and becomes fully applicable on August 2, 2026, but the prohibitions on certain AI practices and the AI literacy obligations have already applied since February 2, 2025. Companies must therefore be able, today, to account for the adequacy of their employees' AI literacy and for the measures they have taken.

I. The core requirement of the Act: AI literacy is something you must do, and keep doing

Article 4 of the EU AI Act explicitly requires providers and deployers of AI systems to "take measures" to ensure, to the best of their ability, a sufficient level of AI literacy among their employees and other persons who operate or use AI systems on their behalf, taking into account their technical background, experience, education and training, as well as the context in which the AI system is used and the people it affects.
 
The key is not simply to "run a course once", but to be able to answer in management terms: Is the training segmented by role? Is there a defined frequency and completion tracking? Is the content aligned with how AI is actually used? And can you justify all of this when an auditor or regulator asks?
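As a rough illustration of what "auditable" can mean in practice, the sketch below models a training-evidence record whose fields answer the questions above. The structure, field names, and the 90% completion threshold are all invented for this example; the Act prescribes no particular format.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical evidence record: each Article 4 question ("do you have
# segmentation? frequency? tracking? context-aligned content?") maps to
# a field an auditor could inspect.
@dataclass
class AILiteracyEvidence:
    role_group: str          # segmentation design: who was trained
    modules: list[str]       # content aligned to actual usage scenarios
    last_delivered: date     # frequency: when training last ran
    next_due: date           # frequency: when a refresh is scheduled
    completion_rate: float   # tracking: share of the group who finished
    usage_context: str = ""  # which AI systems / scenarios it covers

    def is_audit_ready(self, min_completion: float = 0.9) -> bool:
        """Simple check: content exists and completion is tracked and high enough."""
        return bool(self.modules) and self.completion_rate >= min_completion

record = AILiteracyEvidence(
    role_group="HR",
    modules=["AI basics", "High-risk HR scenarios under the AI Act"],
    last_delivered=date(2025, 3, 1),
    next_due=date(2026, 3, 1),
    completion_rate=0.94,
    usage_context="AI-assisted screening of job applications",
)
print(record.is_audit_ready())  # True: content recorded, completion tracked
```

The point is not the code itself, but that every management-level question has a concrete, checkable artifact behind it.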

Turning AI Knowledge into Compliance: A Four-Layer Training Framework

To make AI literacy training effective, we recommend a four-layer structure that moves from shallow to deep and from knowledge to behavior, in line with Article 4's requirement to tailor training to the person, the context of use, and the people affected.
 
  • Basic Cognitive Layer: Establish basic AI concepts, capability boundaries, and common misconceptions to reduce the operational and compliance risks caused by over-trusting or misusing the tools.
  • Risk Identification Layer: Enable employees to view AI usage scenarios from a compliance-risk perspective and recognize which scenarios are particularly sensitive and require stronger controls.
  • Compliance Operations Layer: Translate company rules into actionable processes (e.g., which tools may be used, which data must not be entered, which outputs require manual review, and when approval is needed), so that policies are actually followed in practice.
  • Culture of Responsibility Layer: Make transparency, fairness, accountability, and human oversight everyday work habits, especially in AI-assisted decision-making.

Role differentiation: different departments need to learn different things from the same Act

The essence of Article 4 is tailoring the curriculum to each audience. We therefore recommend differentiating the syllabus and depth by role, rather than giving the whole company one set of materials, which leads to "learning without applying".
 
  • All employees: Focus on safety and compliance guidelines for the day-to-day use of AI tools (e.g., handling sensitive data, reviewing outputs, awareness of errors and bias).
  • Technical/Data teams: Focus on turning compliance requirements into verifiable control points during development, deployment, and maintenance (bias risk, record retention, transparency and governance interfaces).
  • HR/Legal: For highly sensitive decision-making scenarios such as recruitment and performance reviews, strengthen awareness of the regulations and internal controls that apply wherever people are affected by AI.
  • Management: Focus on governance: how to set up the training system, assign responsibilities, maintain audit evidence, and run cross-departmental collaboration, so the company can show it is genuinely "taking measures".
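The role differentiation above can be sketched as a simple curriculum map: every role shares a company-wide baseline, with role-specific modules layered on top. The role keys and module names here are invented for illustration and are not taken from any vendor catalogue.

```python
# Company-wide baseline that every role receives.
BASE = ["AI concepts & capability limits", "Safe day-to-day use of AI tools"]

# Hypothetical role-to-curriculum map mirroring the four roles in the text.
CURRICULUM = {
    "all_staff":  BASE,
    "tech_data":  BASE + ["Bias testing & record retention",
                          "Transparency & governance interfaces"],
    "hr_legal":   BASE + ["High-risk decisions affecting people",
                          "Internal review workflow"],
    "management": BASE + ["Governance setup",
                          "Audit evidence & cross-team responsibilities"],
}

def modules_for(role: str) -> list[str]:
    # Unknown roles fall back to the baseline: nobody is left untrained.
    return CURRICULUM.get(role, CURRICULUM["all_staff"])
```

A map like this also doubles as evidence of "segmentation design": it documents, per role, exactly which content was assigned and why.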

Use KnowBe4 for "Manageable, Trackable, Auditable" Training Operations

If an organization already runs security-awareness or compliance training, folding AI literacy into the existing platform is usually the lowest-effort way to build a continuous chain of evidence.
 
KnowBe4's Compliance Plus focuses on delivering compliance courses through the KnowBe4 training platform: short interactive modules, automated training campaigns, and report tracking, with an emphasis on regular content updates, so training becomes an ongoing operation rather than a one-off project.
 
In addition, according to publicly available information, Compliance Plus offers a library of over 500 modules and emphasizes customizability, continuous updates, and integrated delivery through the KnowBe4 platform, a structure that fits well with the context-specific, audience-specific approach Article 4 calls for.
