When OpenAI was created, its mission was clear from the start: highly capable AI systems must be aligned with human values. Its blog states: "We are improving our AI systems' ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alignment problems."
Ezra Klein also wrote an article about this, in which he said: "[developing AI] is an act of summoning. The coders casting these spells have no idea what will stumble through the portal… They are calling anyway." This, for me, captures the main risk: we are not aware of the scenarios that could play out. Only time will tell.
I don't see genuine commitment to AI ethics unless it's built into the model. It can't require perpetual RLHF, moderation, data labelling, fact-checking and guard rails. 'Alignment' and 'safety' are constructs borne out of Generative AI without 'understanding'.
Thanks for writing about this Jurgen because the hype obscures what is a flaw in the model.
Thank you Jurgen for keeping your finger on the AI pulse. I just spent over an hour responding to comments on another thread about ChatGPT 4. Different topic, same concerns. For the corporates, it's a race to the top—or bottom, depending on your point of view. I see little evidence of real concern for any actual ethics or human values.