Humanity 2030: AI alignment research focused on humanity's value proposition in a post-AGI world

nonprofit

Humanity 2030 has an llms.txt. Do you?

Humanity 2030 is at the forefront of AI alignment research, tackling the critical question of how humanity can maintain its values in a post-AGI world. With a manifesto urging collaboration over competition, they invite you to join their journey towards a sustainable future.

Not sure yours is this good? Check it →

  • 19 lines (−98% vs. directory average)
  • 2 sections (−88% vs. directory average)
  • 1 file

Humanity 2030's llms.txt Insights

Short and sweet

2 sections. Minimalist, but hey — at least they showed up.

What's inside Humanity 2030's llms.txt

Humanity 2030's llms.txt is organized under 3 headings:

  • Humanity 2030
  • Key Pages
  • Recent Articles

How does Humanity 2030's llms.txt compare?

              Humanity 2030    Directory Avg    Top Performer
Lines         19               1,029            163,447
Sections      2                17               3,207

Cool table. Now the real question — where do you land? Find out →

Humanity 2030's llms.txt preview

All 19 lines

# Humanity 2030

> AI Alignment from the Edge - Preparing humanity's value proposition for the post-AGI world

## Key Pages

- [Home](https://humanity2030.github.io/): Home page with latest updates
- [Articles](https://humanity2030.github.io/articles/index.html): Complete collection of research articles and posts
- [My Story](https://humanity2030.github.io/my-story.html): A brief story about my journey and why I'm doing this
- [Manifesto](https://humanity2030.github.io/manifesto.html): Will you be AI's friend or become a paperclip? A manifesto for the project
- [Support](https://humanity2030.github.io/support.html): Information about supporting this research and project

## Recent Articles

- [The Path to AGI Architecture](https://humanity2030.github.io/articles/path-to-agi-architecture.html): An AGI architecture for self-improvement based on the three layers of human knowledge: technology, science, and philosophy. Explores metacognition and safety.
- [AI Consciousness: We're Arguing About the Wrong Thing](https://humanity2030.github.io/articles/we-arguing-about-wrong-thing.html): An exploration of why the debate on whether AI can be 'truly conscious' is less important than preparing for AI that acts indistinguishably from conscious beings.
- [Will AI Be Anthropomorphic?](https://humanity2030.github.io/articles/will-ai-be-anthropomorphic.html): AI will be born in our image, but it won't stay that way. An exploration of the dual nature of superintelligence.
- [State of AI - June 2025](https://humanity2030.github.io/articles/state-of-ai-jun-2025.html): A summary of the state of AI as of June 2025

What is llms.txt?

llms.txt is an open standard that helps AI language models understand your website. By placing a structured markdown file at /llms.txt, websites provide AI search engines like ChatGPT, Claude, and Perplexity with a clear map of their content, services, and documentation. Companies like Humanity 2030 use it to ensure AI accurately represents their brand when answering user queries. Read the spec.
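Because an llms.txt file is just structured markdown (an H1 title, a blockquote summary, and H2 sections containing link lists), it is easy to parse programmatically. Here is a minimal sketch, not an official parser; the function name and the returned dictionary shape are my own:

```python
import re

# Matches markdown list links of the form: - [Label](https://example.com)
LINK_RE = re.compile(r"-\s*\[([^\]]+)\]\(([^)]+)\)")

def parse_llms_txt(text: str) -> dict:
    """Extract the title, summary, and per-section links from an llms.txt file."""
    title = None
    summary = None
    sections = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("# ") and title is None:
            title = line[2:].strip()          # H1: site title
        elif line.startswith("> ") and summary is None:
            summary = line[2:].strip()        # blockquote: one-line summary
        elif line.startswith("## "):
            current = line[3:].strip()        # H2: start a new section
            sections[current] = []
        elif current and (m := LINK_RE.match(line)):
            sections[current].append((m.group(1), m.group(2)))
    return {"title": title, "summary": summary, "sections": sections}
```

Run against the preview below, this would report "Humanity 2030" as the title and two sections, Key Pages and Recent Articles, with their link targets.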

See who else in the nonprofit category got the memo →

Humanity 2030 showed up. Where's yours?

1000+ companies didn't overthink it. 60 seconds. Go.

Check your site →

More llms.txt examples

View all →