Sunday Paper - A Right to Warn about Advanced Artificial Intelligence

Former Google DeepMind and OpenAI employees demand transparency and accountability from AI companies on risk issues.

AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.

AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. – A Right to Warn about Advanced Artificial Intelligence

Tip of the hat to Daring Fireball.

Author: Khürt Williams

I work in application security architecture and I live in Montgomery Township, New Jersey with my wife Bhavna. I am passionate about photography. Expect to find writing on cybersecurity, tropical aquariums, terrariums, hiking, craft breweries, and bird photography.
