AI Policy – Use of Artificial Intelligence (AI)

1. General

2. Use of AI Systems at ULTIDO

3. AI-Generated Content & Transparency

4. Classification as an AI System with Limited Risk

5. Technical and Organizational Measures

6. Ethical Principles in AI Use

7. Internal Compliance and Training

8. Changes to this AI Policy

1. General

We, SnapNext GmbH & Co. KG (operator of the ULTIDO brand), place great importance on the responsible and transparent use of artificial intelligence. With this AI policy, we explain how we use AI systems on our website and in our products, and which measures we take to meet the requirements of the European Artificial Intelligence Act (EU AI Act).

This AI policy applies to:

our website www.ultido.com, as well as

all ULTIDO products and services that use AI technologies (e.g., the ULTIDO AI filter in our photo booths and web apps).

You will learn below which content is AI-generated, how we ensure transparency, and why our offering is classified as an AI system with "limited risk" under the EU AI Act. We also describe the technical precautions and ethical principles we follow when using AI, as well as our internal processes for ensuring compliance.

2. Use of AI Systems at ULTIDO

ULTIDO uses AI technology to offer you creative photo and video experiences. Specifically, modern generative AI models – including diffusion models such as Stable Diffusion – generate new image or video content based on uploaded photos and your inputs (e.g., text prompts describing an image, or the selection of thematic styles). Your photo can, for example, be placed in a fantastical scenario or enhanced with special artistic effects by the AI.

These AI features are always used at your initiative – you decide whether a photo is edited with an AI filter. The underlying AI system analyzes the provided image purely technically, without collecting any further personal information; no personal data beyond the uploaded photo is processed. In particular, we refrain from identifying depicted individuals or systematically evaluating sensitive characteristics from the photo.

The results generated by the AI (images and, where applicable, videos) are made available to you or to authorized persons – for example, participants in an event – via a QR code or download link. We use the captured photos solely for carrying out the requested AI generation and for providing the result. There is no further use of your images, for example for training our models. After processing is completed, we store recordings only for as long as necessary (details can be found in our privacy policy).

3. AI-Generated Content & Transparency

Transparency is important to us: we clearly indicate when content is generated by AI and when you interact with an AI. You will be explicitly informed at the appropriate places that an AI system is in use – for example, directly in the web app or on the display of our photo booth when you use the AI filter. This way, you always know that the result was created with the help of AI and not handcrafted by a human.

All images and videos generated by our AI are also technically marked. We embed metadata (according to the IPTC/XMP standard) into each AI-generated file, which records that the content was artificially generated or manipulated. This keeps the AI origin of the material verifiable even in later use. Where technically possible, we additionally use visible indicators or watermarks, without impairing the user experience.
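The metadata marking described above can be sketched roughly as follows. This is an illustrative example only: it assumes the IPTC "Digital Source Type" vocabulary for XMP, and the helper function and the simplified attribute form are our own invention, not part of any standard library or of our production pipeline.

```python
# Illustrative sketch: build a minimal XMP (RDF/XML) packet that records
# that a file was AI-generated, using the IPTC "Digital Source Type" term
# for content produced by a trained algorithmic model.
from xml.etree import ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
IPTC_EXT_NS = "http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
# IPTC controlled-vocabulary URI for AI-generated media
IPTC_AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def build_xmp_packet(source_type: str = IPTC_AI_GENERATED) -> str:
    """Return an XMP packet string marking the content as AI-generated."""
    ET.register_namespace("rdf", RDF_NS)
    ET.register_namespace("Iptc4xmpExt", IPTC_EXT_NS)
    rdf = ET.Element(f"{{{RDF_NS}}}RDF")
    desc = ET.SubElement(rdf, f"{{{RDF_NS}}}Description")
    # Simplified: record the digital source type as an XMP property
    desc.set(f"{{{IPTC_EXT_NS}}}DigitalSourceType", source_type)
    return ET.tostring(rdf, encoding="unicode")

packet = build_xmp_packet()
```

In a real pipeline, such a packet would typically be embedded into the image file itself with a metadata tool (e.g., ExifTool) rather than kept as a standalone string.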

These measures ensure that AI content is always recognizable as such. They fulfill the transparency obligations of the EU AI Act: users must know when they are dealing with an AI or consuming AI-generated content. Our system is also designed so that it contains no misleading or manipulative functions; it clearly serves the purpose of creating creatively processed images without deceiving you.

4. Classification as an AI System with Limited Risk

The AI systems used at ULTIDO are classified as AI systems with "limited risk" under the EU AI Act. This means that our offering does not fall into the prohibited or high-risk categories. We use AI solely for creative photo and video applications in entertainment and marketing contexts – not for safety-critical decisions, scoring, surveillance, or similar purposes that pose a high risk to user rights. Accordingly, we are primarily subject to certain transparency obligations, but not to the strict certification or reporting obligations that apply to high-risk AI.

As a provider of an AI system with limited risk, we fulfill our obligations by ensuring transparency (see Section 3) and strengthening the AI competence of our employees (see Section 7). We have internally reviewed and documented that our use cases fall into this risk class. If the legal framework changes or our system takes on a higher risk profile, we will immediately take measures to ensure continued full compliance. Currently, our AI functions are permissible and considered harmless under the EU AI Act – provided that the required due diligence and transparency obligations are adhered to. We ensure this through the measures described.

5. Technical and Organizational Measures

In connection with our AI systems, we implement various technical and organizational measures to ensure a secure, reliable, and data protection-compliant operation:

Content filters and usage limits: Our AI models are equipped with filtering mechanisms that aim to prevent inappropriate or dangerous content from being generated (e.g., violent or pornographic representations). We define clear usage guidelines for the AI filters so that they can only be used within the intended scope – namely for creative, thematic image generation.

Quality assurance: Before we publish a new AI function, it is thoroughly tested. We check the generated results for quality, correctness, and any unwanted effects (such as distortions or biases). Even during ongoing operations, we monitor the performance of the AI and make adjustments as needed to immediately rectify errors or deviations.

Data security: The processing of images by the AI takes place on secure, controlled servers in Germany. We protect the transmitted data through encryption and restrict access to authorized persons. Your photos are not forwarded to external AI services or third parties, but are processed within our own infrastructure. This gives us a high level of data protection and control over data flows.

Data minimization: We collect only the data necessary for the AI service (typically just the photo and any prompt texts or selection criteria you enter). This data is used solely for the purpose of image/video generation, in line with the principle of purpose limitation. Data is stored only as long as necessary; for example, uploaded recordings are automatically deleted after a defined period (details can be found in our privacy policy).

Robustness and reliability: Our developers ensure that AI systems function stably and reliably. We keep the AI models used up to date with the current state of technology and install necessary updates or improvements to guarantee safety and stability. When new insights about possible risks or vulnerabilities arise, we respond promptly with appropriate countermeasures to ensure that the use of AI remains safe at all times.

6. Ethical Principles in AI Use

For us, a trustworthy and human-centered approach to AI is paramount. We adhere to the following ethical principles:

Fairness and non-discrimination: Our AI should work equally well for all users. We ensure that no one is disadvantaged or stereotyped in the generated content on the basis of characteristics such as skin color, gender, or origin. If we notice bias or unfairness in the results, we take active countermeasures.

Transparency: Openness about AI use is central (see Section 3 above). We communicate clearly and understandably where and how AI is applied. There are no hidden AI functions. As a user, you always know when a result comes from an AI and not from a human.

Safety and protection: The safety of users has priority. We ensure that our AI applications do not pose a physical or psychological danger. Through content controls and testing, we prevent the AI from generating potentially harmful or offensive content. We also ensure that the use of our AI services takes place in a controlled environment (e.g., always under the supervision of our team at events).

Data protection: We respect your privacy. Personal data is used only to the extent necessary for the AI service (see data minimization above) and never for other purposes without a legal basis or your consent. We adhere to all relevant data protection principles (in particular under the GDPR), such as purpose limitation, data security, and deletion deadlines. Your data belongs to you.

Accountability: We take responsibility for the use of our AI. SnapNext has defined internal responsibilities to monitor compliance with all regulations and principles involved in AI usage (see Section 7). If you have questions or issues related to our AI, we are available to you and seek transparent solutions. Moreover, we continuously review our AI systems to maintain a high ethical standard over time.

7. Internal Compliance and Training

Even behind the scenes, we ensure that the use of AI meets our own standards and legal requirements:

Training and competence: Our employees are regularly trained in handling AI systems. We promote AI competence within the team so that everyone involved is familiar with how our models function, with the legal requirements (e.g., EU AI Act, GDPR), and with our ethical guidelines. New employees and partners are also briefed on our AI policies and processes.

Internal guidelines & processes: We have defined clear internal guidelines for the development and operation of AI features. This includes an internal AI policy or code of conduct that everyone must adhere to. Compliance with these guidelines is monitored by management and our data protection/compliance team.

Documentation (record of processing activities): We maintain an internal record of all AI applications and the related processing activities. For each AI use, it documents the purpose, how it functions, which data it processes, and which protective measures are implemented. This record helps us maintain an overview at all times and provide information to supervisory authorities if needed.

Review and development: We regularly review our AI systems and their use. Should new risks or areas for improvement be identified, we adjust our processes and technical measures accordingly. We also stay informed about developments in AI law and technology so that we can react early. This AI policy itself is evaluated at defined intervals and updated if necessary.

Contact persons and reports: We have appointed contact persons for questions about the use of AI (including our data protection officer, reachable at datenschutz@snapnext.de). Users and customers can contact us at any time with questions or comments. Should incidents or complaints arise in connection with our AI systems, established procedures govern how we handle them internally and, if necessary, inform the relevant authorities.

8. Changes to this AI Policy

We reserve the right to adjust this AI policy as needed in order to respond to new legal requirements or technical developments. If, for example, updates become necessary as a result of the further implementation of the EU AI Act or changes to our offerings, we will update this policy accordingly. The latest version of this AI policy will always be published on our website.

Last updated: May 21, 2025