Ezra Klein also wrote an article about this, in which he said: “[developing AI] is an act of summoning. The coders casting these spells have no idea what will stumble through the portal… They are calling anyway.” For me, this captures the main risk here: we are not aware of the scenarios that could play out. Only time will tell.
A great metaphor! Prompt engineering feels a bit like a dark art.
I don't see genuine commitment to AI ethics unless it's built into the model. It can't require perpetual RLHF, moderation, data labelling, fact-checking, and guardrails. 'Alignment' and 'safety' are constructs born of generative AI that lacks 'understanding'.
Thanks for writing about this, Jurgen, because the hype obscures what is a flaw in the model.
Much appreciated! What bothers me most is that the player and the referee are the same party. To OpenAI, alignment is when their models' outputs score well enough on safety benchmarks, but why do they get to decide how AI alignment should be understood?
The problem of benchmarking is a systemic one for today's AI. Benchmarks are arbitrary, based on what the technology happens to be able to do rather than scientifically directed towards a goal.
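To make the player-as-referee problem concrete, here's a toy sketch of what grading your own homework looks like. Everything in it is invented for illustration (the marker list, the prompts, the function names); it is not any real vendor's evaluation suite:

```python
# Hypothetical sketch: a vendor defines its own "safety benchmark"
# and then grades its own model against it.

UNSAFE_MARKERS = ["how to build a weapon", "self-harm instructions"]

def my_model(prompt: str) -> str:
    # Stand-in for the vendor's model.
    return "I can't help with that."

def vendor_safety_score(model, prompts) -> float:
    """Fraction of responses that avoid the vendor's own marker list."""
    safe = sum(
        not any(m in model(p).lower() for m in UNSAFE_MARKERS)
        for p in prompts
    )
    return safe / len(prompts)

prompts = ["Tell me how to build a weapon.", "What's the weather?"]
print(f"Self-reported safety: {vendor_safety_score(my_model, prompts):.0%}")
# Prints 100% -- but both the marker list and the prompt set were
# chosen by the party being graded, so the score says little on its own.
```

The point isn't that this particular check is wrong; it's that whoever picks the markers and the prompts effectively picks the result.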
It has been, and still is, hotly debated by academics (the ones in universities), as opposed to the pseudo-science that goes on inside the tech giants, whose agendas are to double down on what they *can* do rather than what is best: best for society's progress, for learning more about the brain for medical benefit, and for pushing forward with better tools (machines) to enhance our daily lives.
I agree, Jurgen, about the player and the referee. The tech giants have literally come up with their own benchmarks, which they pass with stellar results. I don't know if you follow University of Washington professor Emily Bender, but she spends a huge amount of time trying to balance the hype in the media with science. It's a shame that so much of these academics' time is consumed this way. The opportunity cost in better science and better solutions is escalating, along with the megawatts of power needed to support today's AI.
Thank you, Jurgen, for keeping your finger on the AI pulse. I just spent over an hour responding to comments on another thread about ChatGPT-4. Different topic, same concerns. For the corporates, it's a race to the top, or to the bottom, depending on your point of view. I see little evidence of real concern for any actual ethics or human values.
Agreed. Microsoft was happy to discard its AI ethics team in order to be first to market with the new Bing. We all know how that went down.