AI
Note: This is a machine-translated version of the original German text. The translation was generated with AI assistance. In case of any discrepancy, the German original shall prevail.
Artificial intelligence is part of our work today – as it is for most companies. We want to be open about what we do with it and what we do not.
What we use AI for
Translations. PNZ communicates in numerous languages – internally in German, English and Hungarian, and externally also in Spanish, Finnish, Asian and other languages, depending on the market and partner. Every AI translation is checked by back-translating it with a second, independent model before any text is published or sent.
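The back-translation check described above can be sketched roughly as follows. This is a minimal illustration only: the threshold, the example strings and the use of a simple character-level similarity measure (Python's difflib) are assumptions standing in for the real editorial comparison, and the actual translation calls to two independent models are left out.

```python
from difflib import SequenceMatcher

# Assumed cutoff for illustration; the real value is an editorial choice.
REVIEW_THRESHOLD = 0.8

def needs_human_review(original: str, back_translation: str,
                       threshold: float = REVIEW_THRESHOLD) -> bool:
    """Flag a translation when its back-translation drifts too far
    from the original source text (character-level similarity)."""
    similarity = SequenceMatcher(None, original.lower(),
                                 back_translation.lower()).ratio()
    return similarity < threshold

# Illustrative strings only; in practice the back-translations come
# from a second, independent translation model.
original = "the product is suitable for children's toys"
faithful = "the product is suitable for children's toys"
drifted = "the product may be used by children"

print(needs_human_review(original, faithful))  # similar: no flag
print(needs_human_review(original, drifted))   # drifted: flag for review
```

A check like this only surfaces candidates for human attention; the decision about whether a translation is acceptable remains with a person.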
Technical summaries. We use AI assistance to distil complex subject matter – from research literature, regulatory documents or internal analyses – down to the essentials. The result is always reviewed by a subject-matter expert.
Plausibility checks and fact-checking. We have our own work reviewed by AI models – not as a substitute for our own judgement, but as an additional control for errors we might otherwise overlook.
Image generation. We use AI tools for visualisations and illustrations. In doing so, we take care not to depict recognisable individuals and not to reproduce copyrighted styles. All generated images are reviewed manually before use.
What we do not do
No customer data enters AI systems – no order data, no contact information, no communication records. Confidential business information and product formulations likewise stay out.
AI does not replace decisions at PNZ. It supports people who retain responsibility.
Why we do not label AI-generated content separately
What we publish is always the result of human work – AI is a tool in that process, like a dictionary or a search engine. A strict labelling requirement would suggest that we have "pure AI texts" – we do not. Every text, every image, every statement that comes from PNZ is the responsibility of a person at PNZ.
We monitor the development of regulatory requirements in this area and will adapt our practice as soon as new rules apply.
Our position
AI is useful. It is also error-prone, occasionally misleading and not neutral. We therefore treat AI output like a first draft: helpful as a starting point, not trustworthy without human review.
We are convinced that an open approach to these tools – including their limitations – is better than silence on the matter.