3 Comments

  1. Ok-Promise-1845

    Feels like Person of Interest in real life. In the show, Samaritan was the dangerous AI, built to control society through total surveillance, reading cameras, phones, bank data, and internet traffic while predicting human behavior so it could manipulate events before people even knew what was happening. That’s the kind of system governments fear rivals building, and secretly want for themselves. The Machine was different, more restrained, designed to quietly protect people, intervene only when necessary, and preserve human freedom. If you compare today’s AI styles, Samaritan represents the cold state-power version, mass surveillance plus hard decisions, while something like Anthropic and its Claude feels closer to The Machine: more cautious, safety-focused, less aggressive, and more human-aligned. So when the Pentagon picks OpenAI, Google, Microsoft, and Nvidia but leaves Anthropic out, people joke they chose to fund Samaritan instead of The Machine.

  2. HI-HIHI-HOHO

    Of course, this is just speculation; I just wanted to put the idea out there. You can guess which models are taking which approach. With one of these approaches, the line of responsibility could shift away from humans more quickly.

    * **RLHF:** Humans review and rank model outputs. A reward model is trained on this feedback, and the AI is optimized to produce responses that humans prefer. → Alignment comes directly from **human judgments**.
    * **Constitutional AI:** The model is guided by a set of explicit principles (a “constitution”). It critiques and revises its own outputs based on these rules, often with less ongoing human input. → Alignment comes from **rule-based self-evaluation**.
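    The two feedback loops above can be sketched in toy form. Everything here is hypothetical illustration, not any real training stack: the RLHF half fits a one-parameter Bradley-Terry reward model to made-up pairwise human preferences, and the Constitutional AI half runs a critique-and-revise loop where toy `critique` and `revise` functions stand in for the model judging its own output against a written principle.

    ```python
    import math

    # --- RLHF sketch: reward model learned from human pairwise preferences ---
    # Toy responses represented by one feature; humans prefer higher values.
    responses = [0.1, 0.4, 0.9, 0.2, 0.7]
    pairs = [(a, b) for a in responses for b in responses if a > b]  # a preferred over b

    w = 0.0   # reward model parameter: reward(x) = w * x
    lr = 0.1
    for _ in range(200):
        for a, b in pairs:
            # Bradley-Terry: P(a preferred over b) = sigmoid(reward(a) - reward(b))
            p = 1 / (1 + math.exp(-(w * a - w * b)))
            # Gradient ascent on the log-likelihood of the human rankings.
            w += lr * (1 - p) * (a - b)
    # The alignment signal came entirely from the human comparisons in `pairs`.

    # --- Constitutional AI sketch: rule-based self-evaluation, no human in the loop ---
    def constitutional_revise(draft, constitution, critique, revise):
        """Critique the draft against each principle and revise it if it fails."""
        for principle in constitution:
            if critique(draft, principle):
                draft = revise(draft, principle)
        return draft

    # Toy stand-ins for the model's own critique and revision steps.
    constitution = ["avoid insults"]
    def critique(draft, principle):
        return principle == "avoid insults" and "insult" in draft
    def revise(draft, principle):
        return draft.replace("insult", "[removed]")

    revised = constitutional_revise("this is an insult", constitution, critique, revise)
    ```

    The contrast is where the signal originates: in the first loop it is the human-ranked `pairs`; in the second it is the written `constitution` that the system applies to itself.
    
    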
