AI Literacy Policy
Fulfilling Article 4 of Regulation (EU) 2024/1689 — ensuring AI literacy for staff, users, and integrators.
1. Purpose & Scope
Article 4 of Regulation (EU) 2024/1689 (“EU AI Act”), applicable since February 2, 2025, requires that providers and deployers of AI systems take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. This obligation takes into account the technical knowledge, experience, education, and training of the relevant persons, and the context in which the AI systems are to be used.
This policy establishes how UBava OÜ (“UBava”) fulfills that obligation as both a provider (of HiveGuard autonomous agents and proprietary AI subsystems) and a deployer (of third-party foundation models via the Privacy Relay).
This policy applies to:
- All UBava employees, contractors, and consultants
- Users of the UBava Privacy Relay platform
- Users of HiveGuard autonomous agents
- Third-party integrators consuming UBava APIs
This policy does not replace the separate AI Act Compliance Statement, GDPR compliance documentation, or the VHH Privacy Air-Lock Protocol technical specification. It supplements them.
2. Definitions
These definitions align with Article 3 of Regulation (EU) 2024/1689:
| Term | Definition |
|---|---|
| AI system | A machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. |
| Provider | A natural or legal person that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts it into service under its own name or trademark. UBava is a provider of the HiveGuard agent suite and the PII Detection and Synthetic Data systems. |
| Deployer | A natural or legal person that uses an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. UBava is a deployer of Claude (Anthropic), GPT (OpenAI), Gemini (Google), and Grok (xAI) via the Privacy Relay. |
| User | Any natural person who interacts with UBava AI systems, including end users of the Privacy Relay and operators of HiveGuard agents. |
| AI literacy | The skills, knowledge, and understanding that allow providers, deployers, and affected persons to make informed decisions regarding AI systems, taking into account their rights and the functioning, capabilities, and limitations of AI. |
3. AI Systems Operated by UBava
3.1 Privacy Relay (Deployer)
UBava deploys the following third-party foundation models through its VHH Privacy Air-Lock protocol:
| Model | Provider | UBava Role | Data Handling |
|---|---|---|---|
| Claude (claude-sonnet-4-20250514) | Anthropic PBC | Deployer | Synthetic data only — PII tokenized before relay |
| GPT (gpt-4o) | OpenAI Inc. | Deployer | Synthetic data only — PII tokenized before relay |
| Gemini (gemini-2.0-flash) | Google DeepMind | Deployer | Synthetic data only — PII tokenized before relay |
| Grok (grok-3) | xAI Corp. | Deployer | Synthetic data only — PII tokenized before relay |
No real personally identifiable information leaves the EU boundary. The VHH Privacy Air-Lock (Docker 1) tokenizes PII locally on Hetzner infrastructure in Nuremberg, Germany, before any query reaches a third-party model.
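The tokenize-before-relay step can be pictured with a minimal sketch. This is illustrative only: the actual VHH Privacy Air-Lock detection and tokenization logic is proprietary, and the regex, function names, and token format below are invented for this example.

```python
import re
import secrets

# Illustrative pattern only -- real PII detection covers far more than emails.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def tokenize_pii(query: str, vault: dict) -> str:
    """Replace detected PII with opaque tokens before the query is relayed."""
    def _swap(match: re.Match) -> str:
        token = f"<PII_{secrets.token_hex(4)}>"
        vault[token] = match.group(0)  # real value stays on local infrastructure
        return token
    return EMAIL_RE.sub(_swap, query)

def detokenize(response: str, vault: dict) -> str:
    """Restore real values once the model response is back inside the boundary."""
    for token, real in vault.items():
        response = response.replace(token, real)
    return response

vault: dict = {}
safe = tokenize_pii("Contact jane.doe@example.com about the invoice", vault)
assert "jane.doe@example.com" not in safe  # no real PII leaves the boundary
```

The key property the sketch demonstrates is that the token-to-value mapping (the `vault`) never travels with the query, which is also why, as Section 4.2 notes, tokenization is not the same as anonymization.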
3.2 HiveGuard Autonomous Agents (Provider)
| Agent | Function | Autonomy Level | Human Oversight |
|---|---|---|---|
| Scout Agent | Reconnaissance and data gathering across defined perimeters | Semi-autonomous | Results queued for human review |
| Analyst Agent | Pattern analysis, threat correlation, intelligence synthesis | Semi-autonomous | Findings require human validation before action |
| Fixer Agent | Self-healing — monitors hive integrity, patches anomalies | Autonomous with constraints | Operates within Tumbler Vault boundaries; escalates to human on critical failures |
| WebScout Agent | Open-source intelligence gathering from public web sources | Semi-autonomous | Results queued for human review; cannot take external action |
All agents operate under the Bee Dance Protocol (BDP) for inter-agent communication and are bound by jurisdiction-aware autonomy constraints — no agent may take action that violates the law of its operating jurisdiction.
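A jurisdiction-aware autonomy constraint can be sketched as a simple gate an agent must pass before acting. The action names, jurisdiction codes, and rule table below are hypothetical; the real Bee Dance Protocol constraint model is internal to HiveGuard.

```python
from dataclasses import dataclass

# Hypothetical rule table: actions an agent may never take per jurisdiction.
PROHIBITED = {
    "EU": {"social_scoring", "emotion_recognition_workplace"},
    "DE": {"social_scoring", "emotion_recognition_workplace", "covert_recording"},
}

@dataclass
class AgentAction:
    name: str
    jurisdiction: str  # jurisdiction the agent is operating in

def is_permitted(action: AgentAction) -> bool:
    """An agent may act only if the action is lawful in its operating jurisdiction."""
    return action.name not in PROHIBITED.get(action.jurisdiction, set())

assert not is_permitted(AgentAction("social_scoring", "EU"))
```

In this sketch an unknown jurisdiction defaults to permitting the action; a production gate would more plausibly default to deny-and-escalate.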
3.3 Proprietary AI Subsystems (Provider)
| System | Container | Function |
|---|---|---|
| PII Detection System | Docker 1 | Identifies and tokenizes personal data in user queries before relay to LLMs |
| Synthetic Data Generator | Docker 4 | Generates synthetic replacements for detected PII, enabling full query context without exposing real data |
4. User Literacy Requirements
Before using any UBava AI system, users must understand and acknowledge the following:
4.1 AI Output Limitations
- AI systems can produce incorrect, incomplete, or misleading outputs. This includes factual errors (“hallucinations”), outdated information, and responses that reflect biases present in training data.
- AI outputs are not verified facts. They are probabilistic text generation. Users must apply their own judgment.
- AI outputs should never be the sole basis for legal, medical, financial, or safety-critical decisions. Professional advice from qualified humans is required for such decisions.
4.2 Data Handling Transparency
- The Privacy Relay processes synthetic data only. The VHH Privacy Air-Lock tokenizes all PII before queries leave UBava infrastructure. Third-party AI models never see real personal data.
- Tokenization is not anonymization. The mapping between real and synthetic data exists within UBava’s secure infrastructure and is governed by GDPR data processing agreements.
- Users should not attempt to embed sensitive data in ways designed to bypass PII detection (e.g., encoding PII in base64, splitting identifiers across multiple messages).
4.3 Autonomous Agent Operations
- HiveGuard agents operate autonomously within defined boundaries. They can gather information, analyze patterns, and in the case of the Fixer, take corrective action — all without per-action human approval.
- Human oversight is maintained through the approval queue. Significant actions, new findings, and anomalies are escalated to human operators before execution.
- Agent lifecycle states (DORMANT, RUNNING, HIBERNATING, PAUSED, FROZEN, ESCALATED, RESTING, DECOMMISSIONED) determine what actions an agent can take. Users operating agents must understand these states.
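The lifecycle states above can be expressed as a small state model. The states are taken from this policy; the rule that only a RUNNING agent may act is an illustrative assumption, not the authoritative HiveGuard state machine.

```python
from enum import Enum, auto

# Lifecycle states as listed in Section 4.3 of this policy.
class AgentState(Enum):
    DORMANT = auto()
    RUNNING = auto()
    HIBERNATING = auto()
    PAUSED = auto()
    FROZEN = auto()
    ESCALATED = auto()
    RESTING = auto()
    DECOMMISSIONED = auto()

# Assumed rule for illustration: only a RUNNING agent executes actions;
# an ESCALATED or FROZEN agent waits for a human operator.
ACTIVE = {AgentState.RUNNING}

def can_act(state: AgentState) -> bool:
    return state in ACTIVE

assert can_act(AgentState.RUNNING)
assert not can_act(AgentState.ESCALATED)
```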
4.4 User Responsibility
- Users retain full responsibility for how they use AI outputs. UBava provides tools; users make decisions.
- Users must not represent AI-generated content as human-authored in contexts where such disclosure is required by law or professional standards.
- Users must report suspected AI system malfunctions, harmful outputs, or security incidents to info@ubava.ee.
4.5 Disclosure Mechanism
Users are informed they are interacting with AI systems through:
- Platform interface labeling identifying AI-generated responses
- API response metadata indicating the model used
- Terms of Service (Section 9.2) disclaiming AI accuracy guarantees
- This policy document, available at /legal/ai-literacy
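To make the API disclosure mechanism concrete, the snippet below shows the kind of metadata a relay response might carry. The field names are assumptions for illustration; the authoritative schema is defined in the API documentation, not in this policy.

```python
import json

# Hypothetical response metadata -- field names are illustrative only.
response_metadata = {
    "generated_by_ai": True,               # platform-level AI disclosure
    "model": "claude-sonnet-4-20250514",   # model identifier (see Section 3.1)
    "pii_tokenized": True,                 # confirms the Air-Lock pipeline ran
}
payload = json.dumps(response_metadata)
```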
5. Staff Training Requirements
All UBava staff who develop, deploy, operate, or support AI systems must complete the following:
5.1 Mandatory Training (All Staff)
| Topic | Content | Frequency |
|---|---|---|
| EU AI Act Fundamentals | Risk classification, prohibited practices, transparency obligations, Article 4 literacy requirements | On hire + annual refresher |
| UBava AI Architecture | Privacy Relay pipeline, VHH Air-Lock, HiveGuard agent framework, Docker topology | On hire + when architecture changes |
| AI Limitations | Hallucination, bias, prompt injection, data leakage vectors, overreliance risks | On hire + annual refresher |
| Incident Response | How to identify, report, and escalate AI system failures or harmful outputs | On hire + annual drill |
5.2 Role-Specific Training
| Role | Additional Training |
|---|---|
| Engineers | Model integration security, prompt injection defenses, PII detection accuracy testing, HiveGuard agent lifecycle management |
| Operations | Agent monitoring dashboards, approval queue management, escalation procedures, system health metrics |
| Customer Support | Explaining AI limitations to users, handling AI-related complaints, recognizing when to escalate to engineering |
| Legal / Compliance | AI Act regulatory updates, GDPR-AI intersection, Data Protection Impact Assessments for AI systems, liaison with Estonian DPA |
5.3 Training Records
Training completion is documented with date, participant, and topic. Records are retained for the duration of employment plus 3 years, in accordance with Estonian labor law and AI Act accountability requirements.
6. Prohibited Uses
The following uses of UBava AI systems are prohibited. These align with Chapter II (Prohibited AI Practices) and Annex III (High-Risk AI Systems) of Regulation (EU) 2024/1689:
6.1 Absolutely Prohibited (Article 5)
- Social scoring — Using AI outputs to evaluate or classify individuals based on social behavior or personal characteristics, leading to detrimental treatment
- Subliminal manipulation — Using AI to deploy subliminal techniques that materially distort behavior in ways likely to cause harm
- Exploitation of vulnerabilities — Targeting specific groups (age, disability, social/economic situation) with AI-driven manipulation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (except narrow exceptions not applicable to UBava)
- Emotion recognition in workplace or educational contexts
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
6.2 High-Risk Domain Restrictions (Annex III)
UBava AI systems must not be used as the primary decision-making system in the following domains without a separate high-risk conformity assessment and registration:
- Employment and worker management (hiring, termination, task allocation, performance monitoring)
- Credit scoring and financial creditworthiness assessment
- Access to essential private or public services and benefits
- Law enforcement (individual risk assessment, polygraph, evidence evaluation)
- Migration and border control (risk assessment, visa processing)
- Administration of justice (assisting judicial authorities in researching and interpreting facts and the law)
- Education (admissions, assessment, proctoring)
Users attempting to deploy UBava systems in these domains must contact legal@ubava.ee for a conformity assessment before proceeding.
6.3 UBava-Specific Prohibitions
- Using HiveGuard agents to conduct surveillance on individuals without lawful basis
- Circumventing the PII tokenization pipeline to expose personal data to third-party models
- Using AI outputs to generate spam, phishing content, or disinformation
- Deploying agents with jurisdiction-aware constraints disabled
7. Review Schedule
| Trigger | Action |
|---|---|
| Annual review | Full policy review by April of each calendar year, beginning April 2027 |
| New AI system added | Policy updated before system goes into production |
| Regulatory change | Policy updated within 60 days of relevant EU AI Act delegated/implementing acts entering into force |
| Significant incident | Post-incident review to determine whether literacy gaps contributed; policy updated if so |
| Staff feedback | Material suggestions reviewed at next quarterly compliance meeting |
Version history is maintained in Git. All changes to this document are tracked with commit messages referencing the change rationale.
8. Contact
For questions about this policy, AI literacy training, or to report concerns about AI system use:
- General inquiries: info@ubava.ee
- Legal & compliance: legal@ubava.ee
- Security incidents: See Incident Response documentation
UBava OÜ
Harju maakond, Tallinn, Estonia
Registry Code: 16873498
This document fulfills the AI literacy obligation under Article 4 of Regulation (EU) 2024/1689 (EU Artificial Intelligence Act). It is a living document maintained by UBava OÜ and updated as required by Section 7 above.