AI Transparency Report

ELEVO AI is committed to responsible AI use in compliance with the EU AI Act (Regulation 2024/1689).

Last updated: April 2026

Our AI Systems

ELEVO AI is a business platform that provides 60+ AI agents to help small and medium businesses with marketing, sales, content creation, accounting, legal guidance, and operations. Our AI systems are powered by Anthropic’s Claude models.

Key Facts:

  • We use Anthropic Claude (Sonnet and Opus) for all AI processing
  • We do NOT train AI models on your data
  • • All data is processed in the EU (Supabase EU region)
  • • AI outputs are advisory — human decisions always take precedence

Risk Classification Summary

Under the EU AI Act (Regulation 2024/1689), AI systems are classified by risk level. ELEVO classifies every agent:

Minimal Risk

Internal tools with no impact on rights or decisions. Examples: task management, research, design.

Limited Risk

Content generation and analysis tools. Transparency disclosure required. Examples: social media, SEO, marketing.

High Risk

Agents affecting legal, financial, or employment decisions. Human oversight recommended. Examples: legal advisor, accountant, CEO strategist.
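As an illustration of the three tiers above, classification can be thought of as a simple lookup from agent type to risk level. The `RISK_TIERS` mapping and agent names below are hypothetical examples, not ELEVO's internal registry:

```python
# Hypothetical mapping from agent type to EU AI Act risk tier,
# following the three tiers described above (illustrative only).
RISK_TIERS = {
    "task_management": "minimal",
    "research": "minimal",
    "social_media": "limited",
    "seo": "limited",
    "legal_advisor": "high",
    "accountant": "high",
}

def risk_tier(agent: str) -> str:
    """Look up an agent's risk tier. Unknown agents default to the
    strictest tier so they are reviewed before use."""
    return RISK_TIERS.get(agent, "high")
```

Defaulting unknown agents to "high" is a conservative design choice: a new agent must be explicitly classified before it is treated as lower risk.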

Data Practices

What data is collected: User inputs, AI outputs, usage metrics, conversation history (with user consent).
How data is stored: Supabase PostgreSQL in EU region. Encrypted at rest and in transit.
Retention period: Audit logs retained for 3 years per EU AI Act requirements. User data deleted upon account deletion.
Who has access: Only the individual user and ELEVO admin (for compliance monitoring). No third-party access.

ELEVO PA™ (Aria) — Orchestration

Aria is the platform’s AI personal assistant and orchestrator. She coordinates all 60+ specialist agents, runs daily intelligence briefings, and manages the autonomous execution system.

Autonomous Execution Risk Levels:

  • Low risk (auto-execute): Content generation, reports, analysis, task creation
  • Medium risk (auto-execute + notify): Social media scheduling, CRM follow-ups
  • High risk (requires human approval): Sending emails, publishing live, financial actions

All autonomous actions are logged with a full audit trail. High-risk actions require explicit user approval via Telegram or dashboard.
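The gating described above can be sketched as a small dispatch function. The names here (`RiskLevel`, `dispatch_action`) are illustrative assumptions, not ELEVO's actual implementation:

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"        # auto-execute
    MEDIUM = "medium"  # auto-execute, then notify the user
    HIGH = "high"      # blocked until explicit user approval

def dispatch_action(risk: RiskLevel, approved: bool = False) -> str:
    """Decide how an autonomous action is handled (illustrative sketch).

    A real system would also write each decision to the audit trail
    and route notifications/approval requests (e.g. via Telegram).
    """
    if risk is RiskLevel.LOW:
        return "executed"
    if risk is RiskLevel.MEDIUM:
        return "executed+notified"
    # High risk: never auto-execute without explicit user approval.
    return "executed" if approved else "pending_approval"
```

For example, `dispatch_action(RiskLevel.HIGH)` returns "pending_approval", while the same call with `approved=True` proceeds to execution.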

Human Oversight

For high-risk agents (legal, financial, strategic), ELEVO provides clear transparency notices and recommends human oversight. AI outputs are always presented as recommendations, never as autonomous decisions. Users maintain full control and can override any AI suggestion. The autonomous execution system enforces this by requiring explicit approval for all high-risk actions.

Contact

Data Protection Contact: team@elevo.dev

Spanish Data Protection Authority (AEPD): www.aepd.es
