What must a lab tell the public or regulators about a model before shipping it? The answer used to be 'nothing.' It is now becoming considerably more.
Until ~2023, what a lab disclosed about a new model was entirely voluntary. You got a blog post, maybe a technical report. Weights, training data, evaluation methodology, and safety testing were proprietary. That norm is changing, unevenly.
"Companies that publish the most detailed safety information tend to be the ones doing the most safety work. That correlation could reverse if disclosure becomes mandatory and checkbox-shaped."
— Rishi Bommasani, Stanford CRFM (paraphrased)
The big idea: disclosure is the first step toward accountability. Accountability requires someone qualified to read the disclosure and act on it. Both are still under construction.
15 questions · take it digitally for instant feedback at tendril.neural-forge.io/learn/quiz/end-safety2-model-disclosure-creators
1. Before approximately 2023, what was the standard practice for AI labs regarding disclosure about their models?
2. Under current EU AI Act requirements, which of the following must general-purpose AI model providers publish?
3. What additional obligations does the EU AI Act impose on providers of models designated as posing systemic risk?
4. What did California SB 53 (signed in 2025) require from developers of frontier AI models?
5. What type of information did the 2023 White House voluntary commitments ask major AI labs to provide?
6. What are model cards and system cards designed to communicate about an AI model?
7. Why does the lesson argue that transparency alone is insufficient for AI safety?
8. According to the observation quoted in the lesson, what might disclosure become if it is made mandatory but superficial?
9. Which entity developed the concept of Model Cards for Model Reporting in 2018?
10. What is the relationship between transparency and accountability in AI, as described in the lesson?
11. What type of information does the lesson say is 'summarized at best, rarely reproducible' in current model disclosures?
12. What is the current status of the UK AI Safety Institute's evaluation access requirements?
13. Which of the following is identified in the lesson as a subject of active litigation related to AI development?
14. What quality variation does the lesson note about model cards and system cards across different releases?
15. What category of information does the lesson say is necessary but not sufficient for AI safety?