
The EU AI Act Deadline Is 118 Days Away — What Event Organizers Need to Do Now

By Lucas Dow

On August 2, 2026 — 118 days from today — the EU AI Act's provisions for high-risk AI systems take full effect across all 27 member states. Each member state must establish at least one AI regulatory sandbox by that date. National market surveillance authorities will begin actively monitoring AI systems in their markets. And organizations using AI that falls under the Act's scope will need to demonstrate compliance or face penalties of up to 35 million euros or 7 percent of global annual turnover.

If you run events in Europe, or run events anywhere that serve EU residents, this affects you. Not in an abstract "keep an eye on this" way. In a practical, "you need to do specific things before August" way.

What the AI Act Actually Regulates

The AI Act uses a risk-based classification system. Not all AI is treated the same.

Prohibited AI has been enforceable since February 2025. This includes manipulative AI systems, social scoring, and real-time biometric identification in public spaces. Unless you are running a surveillance operation disguised as an event, this category probably does not apply to you.

High-risk AI is where the August 2026 deadline matters. High-risk systems include AI used in employment decisions, credit scoring, law enforcement, education, and critical infrastructure. The Act defines these through Annex III, which lists specific use cases. Event management AI does not appear explicitly, but there are edge cases — particularly around automated decision-making that affects individuals.

Limited-risk AI includes chatbots, AI-generated content, and emotion recognition systems. The primary obligation here is transparency: users must know they are interacting with AI.

General-purpose AI covers foundation models like GPT, Claude, or Gemini. Providers of these models have their own obligations, but as an event organizer using them through a vendor's interface, your obligations are primarily around transparency and data governance.

Where Event AI Sits in the Risk Framework

Most AI tools used in event management fall into the limited-risk or general-purpose categories. But the boundaries are not always clean.

Clearly limited-risk:

  • AI chatbots answering attendee questions
  • Automated email drafting and response
  • AI-powered content generation for event descriptions
  • Sentiment analysis of post-event surveys

Potentially higher-risk depending on implementation:

  • AI that decides who gets access to an event (automated eligibility screening)
  • AI that assigns seating based on attendee profiling
  • Automated pricing that adjusts based on individual attendee characteristics
  • AI that screens speaker applications or vendor proposals

The distinction matters because higher-risk classification triggers requirements for documentation, human oversight, conformity assessments, and ongoing monitoring that limited-risk systems do not face.

Five Things to Do Before August

You do not need a legal team and a six-month project plan to prepare. But you do need to take specific actions.

1. Inventory Your AI

List every tool in your event workflow that uses AI. This includes obvious ones like chatbots and email agents, but also less obvious ones: your CRM's predictive lead scoring, your email platform's send-time optimization, your analytics tool's attendee segmentation, and any recommendation engine that suggests sessions or connections to attendees.

For each tool, document: what it does, what data it processes, who the vendor is, and what decisions it makes autonomously versus with human oversight.
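One lightweight way to keep this inventory is a structured record per tool, with exactly the fields listed above. The sketch below shows one possible shape; the tool name, vendor, and field values are hypothetical examples, not real product assessments.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in the AI inventory: what the tool does and how autonomous it is."""
    name: str                  # the tool or product name
    purpose: str               # what it does in your event workflow
    data_processed: list[str]  # categories of data the tool touches
    vendor: str
    autonomous_decisions: list[str] = field(default_factory=list)  # made without human review
    human_reviewed: list[str] = field(default_factory=list)        # a person signs off

# Hypothetical example entry
chatbot = AIToolRecord(
    name="AttendeeBot",           # hypothetical tool name
    purpose="Answers attendee FAQ questions before and during the event",
    data_processed=["questions asked", "ticket tier", "session schedule"],
    vendor="ExampleVendor GmbH",  # hypothetical vendor
    autonomous_decisions=["drafts and sends FAQ answers"],
    human_reviewed=["refund and access requests escalated to staff"],
)

inventory = [chatbot]
print(len(inventory), inventory[0].name)
```

The point of the structure is less the code than the discipline: every tool gets the same four questions answered, and the split between autonomous and human-reviewed decisions feeds directly into the risk classification in the next step.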

2. Classify the Risk Level

Using the AI Act's Annex III categories, determine where each tool sits. Most will be limited-risk. But if any tool makes decisions that materially affect individuals — access decisions, pricing decisions, eligibility decisions — examine whether it triggers higher-risk obligations.

When in doubt, treat it as higher-risk. The compliance cost of over-classifying is documentation. The non-compliance cost of under-classifying is a fine.
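The "when in doubt, treat it as higher-risk" rule can be captured as a first-pass screen. This is a sketch of that rule of thumb, not a substitute for a proper Annex III assessment, and certainly not legal advice; the triggering decision types come from the paragraph above.

```python
# Decision types the text flags as materially affecting individuals
MATERIAL_DECISIONS = {"access", "pricing", "eligibility"}

def screen_risk(decision_types: set[str], classification_clear: bool = True) -> str:
    """Rough first-pass screen: default to the stricter bucket when unsure.

    Mirrors the article's rule of thumb -- the cost of over-classifying
    is documentation; the cost of under-classifying is a fine.
    """
    if decision_types & MATERIAL_DECISIONS or not classification_clear:
        return "examine as higher-risk"
    return "likely limited-risk"

print(screen_risk({"content generation"}))
print(screen_risk({"access"}))
print(screen_risk(set(), classification_clear=False))
```

Note the second branch: a tool whose classification you cannot determine gets the stricter treatment by default, which is exactly the posture the penalty asymmetry rewards.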

3. Update Your Transparency Practices

The AI Act's transparency requirements are straightforward but specific. If attendees interact with an AI system, they must know it is AI. This means:

  • Chatbots must identify themselves as AI, not pretend to be human
  • AI-generated emails should disclose that they were drafted or assisted by AI
  • AI-generated content (event descriptions, speaker bios, session summaries) should be labeled
  • If you use emotion recognition or biometric categorization at events, attendees must be informed before the processing happens

This does not mean every automated email needs a banner saying "WRITTEN BY ROBOT." It means your communication policies and privacy notices need to be honest about where AI is involved.
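For a chatbot, the most robust way to meet the disclosure requirement is to make it part of the reply path rather than an optional template setting. A minimal sketch, with hypothetical function and variable names:

```python
AI_DISCLOSURE = "You're chatting with an automated assistant. Ask for a human at any time."

def send_chat_reply(session_messages: list[str], reply: str) -> list[str]:
    """Append a reply, prepending the AI disclosure on first contact.

    Building the disclosure into the send path means a template change
    cannot accidentally remove it.
    """
    if not session_messages:  # first message of the session
        session_messages.append(AI_DISCLOSURE)
    session_messages.append(reply)
    return session_messages

conversation: list[str] = []
send_chat_reply(conversation, "The keynote starts at 9:00 in Hall A.")
print(conversation[0])  # the disclosure line comes first
```

The design choice here is the interesting part: disclosure as a property of the channel, enforced in code, rather than a policy someone has to remember.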

4. Verify Your Vendors

Your AI vendors are your partners in compliance. Ask them directly:

  • How do they classify their AI system under the EU AI Act?
  • What documentation do they provide for transparency and accountability?
  • Where is attendee data processed, and is it shared with third-party model providers?
  • Do they offer audit logs of AI decisions?
  • Can attendees request human review of automated decisions?

A vendor who cannot answer these questions by April 2026 is unlikely to be compliant by August 2026. That is information you need now, not in July.

5. Establish a Human Oversight Process

Even for limited-risk AI, the Act emphasizes human oversight. This does not mean a human must approve every AI action. It means there must be a mechanism for humans to intervene, review, and override AI decisions when necessary.

In practical terms for event management:

  • Someone reviews AI-generated attendee communications before they go to sensitive audiences (VIPs, speakers, sponsors)
  • There is a process for attendees to escalate from AI to human support
  • Automated decisions about access, pricing, or eligibility can be reviewed and reversed by a human
  • AI actions are logged so that post-event review is possible

The GDPR Advantage

If you already operate under GDPR — and if you run events involving EU residents, you do — you have a significant head start. GDPR's principles of data minimization, purpose limitation, and individual rights overlap meaningfully with the AI Act's requirements.

The AI Act adds a new layer: not just how you handle data, but how the AI system itself behaves. It asks questions about reliability, transparency, and accountability that GDPR does not address directly. But the organizational muscle of GDPR compliance — data processing records, impact assessments, vendor due diligence, individual rights mechanisms — translates directly into AI Act readiness.

European event organizers who treated GDPR as a competitive advantage rather than a burden are best positioned for this transition. The same discipline that made you good at data protection makes you good at AI governance.

What Happens After August

Enforcement will not be instantaneous. National authorities are still establishing their AI oversight structures, and there will likely be a period of guidance and adjustment before aggressive enforcement begins. But the legal obligations are real from day one, and the reputational risk of being caught unprepared is significant in an industry built on trust and attendee relationships.

The smarter play is not to wait for enforcement and react. It is to build compliance into your AI adoption process now, so that every new tool you add is evaluated against the framework from the start.

118 days is not a lot of time. But it is enough time to do this right — if you start now.