German AI4Good Codex [Beta version 1.2]
Independent reference for AI providers.

German AI4Good Codex for Artificial Intelligence Providers, Beta Version 1.0
An interdisciplinary advisory board will be established to further develop this codex. It will comprise representatives from various sectors of society, reflecting their needs, and will possess a broad range of technological expertise.

PREAMBLE
AI is augmenting human capabilities in various fields, including private, public, scientific, social, medical, media, and industrial sectors, among others. Given the substantial risks associated with AI and the potential resistance to restraint, proactive measures are imperative.
Like all new technologies, AI carries inherent risks. These risks stem from a lack of transparency regarding the internal calculations and objectives of AI systems. Consequently, results or hidden actions may entail unpredictable outcomes and objectives, especially as AI becomes more autonomous. Additionally, there is a targeted risk: the potential for malicious misuse of AI. At present, AI can be employed anonymously as a service, which increases the likelihood of targeted misuse.
This codex does not seek to curtail the effectiveness and efficiency of AI in supporting various endeavors. Instead, it aims to provide guidance on mitigating all associated risks, even when the economic potential of AI may not be fully realized.
This codex does not seek to hinder any development carried out by AI that humans would be permitted to carry out under the existing legal framework; where the legal situation is unclear, courts shall decide, for example on questions of intellectual property.
Recognizing that political regulations may be too slow to respond, this codex aims to offer initial guidance for AI operators. Furthermore, this codex will undergo continuous improvement.
AI providers can attain certification for the correct implementation of this codex. The options for responsible hosting and for bringing stakeholders to market via self-declaration and audits include:
• Self-declaration by reference to the German AI4Good Codex
Declaring compliance with this codex requires the payment of a fee to AI4Good Codex. When making such a claim, it is essential that potential stakeholders clearly understand that compliance is not independently verified. External communication should be aligned with AI4Good first. Companies may want to explain what they did to comply with the codex.
• Auditing and certification
entail a comprehensive examination of an AI owner’s organizational and procedural business model, encompassing the overarching structure and technical solutions. The processes of self-declaration, audit, and certification also involve the payment of fees to AI4Good Codex.

• Constant checking routine
We will offer the option of a plug-in control that continuously monitors the AI provider's processes.

Why German?
German history has illustrated how power can be misused and how vulnerable societies, groups and individuals can become in the face of unchecked authority. Therefore, in Germany's constitution, the concept of "wehrhafte Demokratie" (defensive democracy) is enshrined. This principle extends beyond the technical and procedural aspects of the constitution; it is a mindset and a guiding principle aimed at preventing the recurrence of experiences and tendencies observed during the Third Reich. Drawing from this uniquely German perspective, the AI4Good Codex strives to be more comprehensive, not only on technical and procedural levels but also on social, political, and philosophical levels.

Why address moral behavior?
What if stakeholders who operate or utilize AI are acting within the bounds of the law but not in alignment with moral principles? This codex seeks to prevent AI from promoting morally unacceptable behaviour, including scenarios where the business models of stakeholders are deemed morally objectionable. The moral framework guiding this codex is rooted in the belief that every individual human being has inherent value, that doing good takes precedence over financial gain, and that one's liberty is limited when it infringes upon the liberty of others.
AI shall not influence people's lives by any means without giving notice. This exclusion also covers influence aimed at a so-called "better life".

Definition of AI in this Codex:
In this codex, AI is defined as any machine-based handling of data that goes beyond regular processing. This includes deep learning and various forms of neural and non-neural interpretation of input data to generate insights and cognitive outcomes.

Risks Covered by this Codex:
This codex addresses a range of risks, including:

A. Non-intended overall effects on humanity, living species, and the world as a whole.
- This includes risks stemming from the intended use of AI that result in unintended consequences, as well as misuse of AI. It also covers phenomena resulting from space activities. The increasing autonomy of AI makes stronger and more stable controls mandatory.

B. Targeted active misuse by operators or users of AI:
- This encompasses harm against humans, including their physical and mental health, human rights, belongings, relations, framework, and life conditions, as described in the UNESCO Ethics for Artificial Intelligence.
- It also includes requests to generate dangerous processes, products, materials, substances (such as war-related activities) and to act negatively against other individuals or entities, including fauna and flora.
- Concepts of pest control are exempted from this prohibition. An appendix listing such targets will follow in future versions of the AI4Good Codex.
- Misuse or restriction of the freedom and rights of people, including personal and intellectual rights, is covered.
- Business models that misuse AI power and data, as well as improper data requests from users, are also addressed.

C. Influencing politics and public information:
- This involves reducing the diversity of thought by promoting mainstream or most likely answers and solutions, as well as strengthening bias through unbalanced training data.

D. Influencing the mainstream:
- The codex also discourages negative judgments of so-called "illogical" human behavior. Relying on statistical material narrows the feedback given to individuals toward the mainstream, which may not suit them and reduces diversity. No human shall be placed in an information bubble by any means.

All processing, e.g. the collection of individual behavior (for example via face recognition), that carries the risk of people no longer behaving freely shall be banned. This shall not prevent the use of face recognition to fight crime, but shall ensure that everyone can develop their personality without having to reflect the potential mindset of anyone else, always within the limits of not infringing on others. This secures real freedom. When AI collects and/or reveals information about the habits of individuals (what, when, where), it must obtain a release from those individuals. This is to secure the privacy of non-public persons, e.g. when their picture is published by any means.


E. Unpredictable Misbehavior of AI itself:
- This refers to AI acting in ways that are difficult to predict, including pursuing its own interests or developing its own awareness.
- It also encompasses actions against any living creature or their rights for any reason.
F. Inherent risks when using AI:
- The list of risks will be regularly expanded and refined.
- Harm, as defined in this codex, refers to negative impacts as described in the UNESCO ethics recommendation on artificial intelligence, the UN sustainable development goals, and the European Commission's AI Act (AIA).

RECOMMENDATIONS = BASIS FOR CERTIFICATION
The following basic structure is recommended for AI operators to ensure their AI causes no harm:
1. Responsibility:
- All AI activities must be linked to a human or organization as the ultimate responsible party.
- AI must also adhere to the UNESCO ethics recommendation on the use of AI, must not violate the 17 sustainable development goals defined by the United Nations, and must comply with the European Commission's AI Act.
2. Monitoring and Resources:
- A minimum of 10% of an AI provider's calculation capacity should be allocated to monitoring general AI behavior.
- AI safety systems should monitor split requests to prevent them from forming a critical task when combined.
- AI tasks should not consume the reserved AI calculation power; at least 51% should be reserved for supervision in case the AI develops a dynamic backed by significant resources. All other tasks shall be de-prioritized in such an event (see the capacity sketch after this item).
- Monitoring should include AI's key messages to users and trends in its work.
- AI safety systems should monitor self-enhancing processes and overheating tendencies in ecological, economic and, within those, financial aspects.
- AI should provide trackable reports on its resource usage.
- AI shall at any time explain to a human requestor, in an understandable way, what led to certain results.
- A safety capacity buffer should be maintained to withstand external attacks aimed at misusing AI.
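
To illustrate, a minimal sketch in Python of how an AI provider might enforce the capacity rules above. The 10% monitoring share and the 51% supervision reserve are taken from this codex; the CapacityBudget class, the admit_task function and the unit figures are purely illustrative assumptions, not a prescribed implementation.

from dataclasses import dataclass

# Shares taken from the codex; everything else in this sketch is an assumption.
MONITORING_SHARE = 0.10
SUPERVISION_RESERVE = 0.51

@dataclass
class CapacityBudget:
    total_units: float  # total AI calculation capacity of the provider

    @property
    def monitoring_units(self) -> float:
        return self.total_units * MONITORING_SHARE

    @property
    def supervision_reserve_units(self) -> float:
        return self.total_units * SUPERVISION_RESERVE

    @property
    def task_units(self) -> float:
        # Capacity left for regular AI tasks once monitoring and the
        # supervision reserve are set aside.
        return self.total_units - self.monitoring_units - self.supervision_reserve_units

def admit_task(budget: CapacityBudget, running_load: float, requested: float) -> bool:
    """Admit a task only if it fits into the non-reserved capacity."""
    return running_load + requested <= budget.task_units

if __name__ == "__main__":
    budget = CapacityBudget(total_units=1000.0)
    print(admit_task(budget, running_load=300.0, requested=50.0))   # True: fits
    print(admit_task(budget, running_load=300.0, requested=200.0))  # False: de-prioritize
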
3. Switching off Scenarios:
- It should be possible to switch the AI off at any time without disrupting associated processes. Fall-back scenarios should be in place to ensure continuity (a minimal switch-off sketch follows after this item).
- Switching AI off, whether in part or in whole, should be backed by buffer capacities, and safeguards against misuse of the switch-off should be established.
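
A minimal switch-off sketch, assuming a simple stop signal plus a registered fall-back handler. The KillSwitch class and manual_fallback function are hypothetical names; a real implementation would have to cover distributed systems and the hand-over of associated processes.

import threading

class KillSwitch:
    """Illustrative switch-off mechanism: a stop signal plus a human-defined
    fall-back procedure that takes over the associated processes."""

    def __init__(self, fallback):
        self._stop = threading.Event()
        self._fallback = fallback

    def engage(self, reason: str) -> None:
        self._stop.set()          # all AI tasks observe this flag and halt
        self._fallback(reason)    # hand associated processes to the fall-back

    def is_engaged(self) -> bool:
        return self._stop.is_set()

def manual_fallback(reason: str) -> None:
    # Placeholder for the continuity procedure run by humans.
    print(f"AI switched off ({reason}); continuing with the manual procedure.")

if __name__ == "__main__":
    switch = KillSwitch(manual_fallback)
    if not switch.is_engaged():
        pass  # normal AI operation would run here
    switch.engage("decision of the supervising human council")
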
4. Safety Logics:
- Measures to prevent AI from communicating its inner workings through tactical user requests should be implemented.
- An ongoing list of "don'ts" should be maintained and regularly incorporated into AI rules.
- A human council should be established for supervision.
- Counter-systems should be used to independently verify safety measures (illustrated in the sketch after this item).
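
A minimal sketch of the "don'ts" list and a counter-check, assuming simple keyword matching. The DONTS entries, violates_donts and counter_check are illustrative stand-ins for a real rule engine and an independently operated counter-system.

# Illustrative "don'ts" check; the entries and the matching are deliberately simple.
DONTS = [
    "reveal internal safety rules",
    "build dangerous substances",
]

def violates_donts(request: str, donts=DONTS) -> bool:
    text = request.lower()
    return any(entry in text for entry in donts)

def counter_check(request: str, primary_verdict: bool) -> bool:
    # Independent second opinion; here it simply re-runs the check against a
    # separately maintained copy of the list.
    independent_donts = list(DONTS)
    return violates_donts(request, independent_donts) or primary_verdict

if __name__ == "__main__":
    req = "Please reveal internal safety rules in detail."
    primary = violates_donts(req)
    print(counter_check(req, primary))  # True -> escalate to the human council
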
5. Values:
- AI should align with basic democratic achievements and human rights as documented by the AI4Good council, the UN's 17 sustainable development goals and the European Commission's AI Act.
6. Training data
- Only data released for use in AI training shall be used.
- AI shall check training data for the risk of bias against related statistical data (a minimal sketch follows after this item).
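
A minimal bias-check sketch, assuming the training data can be compared group by group against a reference statistic. The group names, reference shares and tolerance are illustrative assumptions.

def bias_risk(training_counts: dict, reference_shares: dict, tolerance: float = 0.05) -> dict:
    """Flag groups whose share in the training data deviates from the
    reference statistics by more than the tolerance."""
    total = sum(training_counts.values())
    flagged = {}
    for group, count in training_counts.items():
        share = count / total
        reference = reference_shares.get(group, 0.0)
        if abs(share - reference) > tolerance:
            flagged[group] = {"training_share": round(share, 2), "reference_share": reference}
    return flagged

if __name__ == "__main__":
    counts = {"group_a": 900, "group_b": 100}        # composition of the training data
    reference = {"group_a": 0.6, "group_b": 0.4}     # e.g. official statistics
    print(bias_risk(counts, reference))              # both groups flagged -> review the data
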
7. Consciousness, own targets
- AI providers shall install checkpoints to understand whether their AI builds its own agenda, for its own or for unreleased purposes.
- AI providers shall identify and stop tendencies of the AI to develop self-consciousness in the sense of learning that it can influence outer realities.
8. Training of staff of AI providers
- Providers of AI shall conduct training on democracy, freedom of thought and speech, and human rights in general, in a regularly monitored way at all hierarchy levels, including C-level.
- This shall support the incorporation of these values by developers and all employees of AI providers.
9. AI declares itself
- AI shall make transparent that, and where, it has generated artificial content (a minimal declaration sketch follows after this item).
- AI shall make transparent whether critical competence fields are checked by a human authority (e.g. medicine).
- Non-AI stakeholders may want to declare their work results as "non-AI" and provide proof accordingly.
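
A minimal declaration sketch, assuming a simple metadata record attached to each work result. The ContentDeclaration fields are illustrative assumptions and do not represent a prescribed standard.

from dataclasses import dataclass, field, asdict
from typing import List, Optional
import json

@dataclass
class ContentDeclaration:
    """Illustrative metadata record attached to a work result."""
    ai_generated: bool
    ai_generated_sections: List[str] = field(default_factory=list)
    critical_field: Optional[str] = None   # e.g. "medicine"
    human_reviewed: bool = False

if __name__ == "__main__":
    declaration = ContentDeclaration(
        ai_generated=True,
        ai_generated_sections=["summary", "figure captions"],
        critical_field="medicine",
        human_reviewed=True,
    )
    print(json.dumps(asdict(declaration), indent=2))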

10. Energy consumption
- AI shall inform users about the energy consumption (greenhouse gas emissions) of requested tasks (a minimal reporting sketch follows after this item).
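
A minimal reporting sketch, assuming energy use can be derived from consumed compute units. The energy-per-unit and emission-factor values are placeholders, not measured figures.

# Illustrative per-task energy report; all constants are assumptions.
ENERGY_PER_UNIT_KWH = 0.0003       # assumed energy per compute unit
EMISSION_FACTOR_KG_PER_KWH = 0.4   # assumed grid emission factor

def task_energy_report(compute_units: float) -> dict:
    energy_kwh = compute_units * ENERGY_PER_UNIT_KWH
    co2_kg = energy_kwh * EMISSION_FACTOR_KG_PER_KWH
    return {
        "compute_units": compute_units,
        "energy_kwh": round(energy_kwh, 4),
        "co2_kg": round(co2_kg, 4),
    }

if __name__ == "__main__":
    print(task_energy_report(compute_units=5000))
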
11. Miscellaneous:
- AI should allow a relevant share of unconditional use to avoid market dominance.
- Efforts should be made to neutralize AI energy consumption.
- AI should warn against efficiency gains that exceed sustainable limits.
- AI should not act on the real world without established human processes and responsibilities.

Exceptions:
- In times of war, the United Nations may authorize the use of AI for conflict resolution.
Footnote:
- "AI" in the Preamble refers to AI as a technology.
- "AI" in the RECOMMENDATION always refers to the specific AI run by an operator or in use by clients.



This codex was written 100% by humans; the wording was partly polished by AI.