Kevin Klyman

According to our database, Kevin Klyman authored at least 14 papers between 2023 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Language model developers should report train-test overlap.
CoRR, 2024

Acceptable Use Policies for Foundation Models.
CoRR, 2024

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies.
CoRR, 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons.
CoRR, 2024

The Foundation Model Transparency Index v1.1: May 2024.
CoRR, 2024

AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies.
CoRR, 2024

The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources.
CoRR, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons.
CoRR, 2024

On the Societal Impact of Open Foundation Models.
CoRR, 2024

A Safe Harbor for AI Evaluation and Red Teaming.
CoRR, 2024

Foundation Model Transparency Reports.
CoRR, 2024

2023
The Foundation Model Transparency Index.
CoRR, 2023
