White House memo claims mass AI theft by Chinese firms
A memo from Michael Kratsios says firms, mainly in China, are wrongfully distilling US AI models.
Stay informed on AI governance, compliance, and regulation news. Curated updates on AI ethics, policy, and enforcement from trusted sources.
Monitoring 7021+ articles from 21+ trusted sources including MIT Technology Review, TechCrunch, The Verge, and AI News in 2026.
Randy New is the founder and editor of AI Governance Watch. He is a FinTech executive with over 30 years of experience in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy specializes in cybersecurity intelligence and AI governance.
Randy also publishes Cyber Security Wire and Human vs AI. Learn more about AI Governance Watch and its mission.
AI Governance Watch is a curated news platform that aggregates AI governance, compliance, and regulation news from over 21 trusted sources. It helps professionals track AI policy developments worldwide.
Sources include MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications. As of 2026, the platform has aggregated 7021+ articles across six categories.
Articles are automatically categorized into six areas: regulation, policy, ethics, compliance, enforcement, and general AI news. Each category focuses on a specific aspect of AI governance.
Recently curated articles on AI regulation, policy, and compliance:
A memo from Michael Kratsios says firms, mainly in China, are wrongfully distilling US AI models.
Claude users can access more apps with Anthropic's AI now thanks to new connectors for everything from hiking to grocery shopping. Anthropic already supported connecting numerous work-related apps to Claude, like Microsoft apps, but this expansion focuses on personal apps like Audible, Spotify, Uber, AllTrails, TripAdvisor, Instacart, TurboTax, and others. Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant…
Visibility is difficult in OT and industrial control system environments. A new NIST cybersecurity project aims to help address those challenges.
Sierra, the AI customer service agent startup founded by technologist Bret Taylor, announced today that it has acquired the YC-backed French startup Fragment.
Rising refusal rate from Acceptable Use Classifier leaves customers paying for nothing: Anthropic's release last week of Opus 4.7 came with stronger safeguards to prevent misuse. Unfortunately, these safeguards have also managed to thwart legitimate use…
Before moving to production, consider whether the method is repeatable, a panelist said at the California Public Sector CIO Academy. Others recommended identifying service challenges early and ensuring leadership is aligned.
Technically speaking, there's no practical benefit to using PQC. So why is it being used?
The Renpho Eye Massager can help alleviate piercing headaches and migraines - and it's 23% off at Amazon right now.
This Wednesday, Donald Trump claimed that, out of respect for him, Iran had agreed to spare the lives of eight young Iranian women facing execution. But he was quickly met with accusations that the women were AI-generated. Iran gave conflicting information, mocking the AI claims while also declaring that none of the women faced confirmed death sentences. The reality, as confirmed by two NGOs, is that none of the parties shared accurate information.
Meta is planning to lay off around 10 percent of employees in May, according to a memo from the company's chief people officer, Janelle Gale, published by Bloomberg. That means approximately 8,000 people will see their jobs cut. Meta will also be closing around 6,000 open roles, according to Gale. The cuts follow Meta's significant investments in AI, including spending huge sums to hire top talent and build data centers. The company forecast in January that it will spend $115 billion to $135 billion…
Noscroll wants to cure doomscrolling with an AI bot that reads the internet for you.
Preparing for the impact of quantum computing requires more than technical upgrades; it demands coordination across federal networks.
In this week’s episode of Uncanny Valley, we talk about Tim Cook’s legacy as CEO at Apple and what his long-rumored departure means for the future of one of the world's biggest companies.
The cuts, which employees had been expecting for weeks, will be Meta's largest layoff since 2023.
OpenAI says its latest model offers increased capabilities across a broad variety of categories.
A Palantir post citing CEO Alex Karp's book called for mandatory military service and closer ties between Silicon Valley and the Pentagon while criticizing "hollow pluralism" and warning of a new AI arms race. But Palantir is just one of the tech companies blurring the lines between Silicon Valley and Washington – while growing too big too fast for traditional oversight.
Anthropic's tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model fell into the wrong hands anyway. According to Bloomberg, a "small group of unauthorized users" has had access to Mythos - whose existence was first revealed in a leak - since the day Anthropic announced plans to offer it to a select group of companies for testing…
CS 153 has gone viral on the Palo Alto campus—and on X. Not everyone is happy about it.
New Google Cloud Managed Lustre capabilities with DDN EXAScaler improve AI training, inference, and high-performance computing, delivering scale, performance, and economics. LAS VEGAS, April 23, 2026: DDN has shared new […] (AIwire)
The tech giant is positioning autonomous, long-running agents as the next defining shift in enterprise AI.
AI governance is the set of rules, policies, and frameworks that ensure artificial intelligence is developed and used responsibly. It covers ethical guidelines, compliance standards, and oversight mechanisms to keep AI safe, fair, and accountable.
The EU AI Act requires businesses to classify their AI systems by risk level and meet specific obligations. High-risk systems need conformity assessments, technical documentation, and human oversight. Non-compliance can result in fines up to €35 million or 7% of global turnover.
The NIST AI RMF is a voluntary U.S. framework that helps organizations identify, assess, and mitigate AI-related risks. It is built around four core functions: Govern, Map, Measure, and Manage.
AI compliance is critical because governments worldwide are actively enforcing AI regulations. The EU AI Act carries heavy fines, the U.S. has expanded federal AI oversight, and countries like Canada, Brazil, and China have enacted AI-specific laws. Non-compliance risks penalties, reputational harm, and operational disruption.
The key AI ethics principles are fairness, transparency, accountability, privacy, safety, human oversight, and inclusiveness. These principles are reflected in major frameworks including the OECD AI Principles and the EU Ethics Guidelines for Trustworthy AI.
Organizations implement AI risk management by creating governance structures, running impact assessments, testing for bias, monitoring model performance, and documenting decisions. The NIST AI RMF and ISO/IEC 42001 provide standardized approaches for this process.
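As a loose illustration of the documentation step, the sketch below shows one way a risk register entry might be structured. The field names and example values are hypothetical and only roughly mirror the NIST AI RMF functions; they are not a mandated schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessmentRecord:
    """One hypothetical entry in an AI risk register,
    loosely aligned with the NIST AI RMF functions."""
    system_name: str
    assessed_on: date
    mapped_risks: list[str] = field(default_factory=list)        # Map: identified harms
    measurements: dict[str, float] = field(default_factory=dict)  # Measure: metrics observed
    mitigations: list[str] = field(default_factory=list)          # Manage: actions taken
    owner: str = "unassigned"                                      # Govern: accountable party

# Illustrative entry for a fictional credit-decisioning model
record = RiskAssessmentRecord(
    system_name="loan-approval-model",
    assessed_on=date(2026, 4, 1),
    mapped_risks=["disparate impact on protected groups"],
    measurements={"demographic_parity_difference": 0.07},
    mitigations=["re-weighted training data", "quarterly bias re-test"],
    owner="model-risk-committee",
)
print(record)
```

Keeping assessments in a machine-readable form like this makes it easier to audit decisions later and to feed measured metrics into ongoing monitoring.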
Major AI regulations include the EU AI Act, U.S. Executive Orders on AI Safety, Canada's AIDA, South Korea's AI Basic Act, China's Generative AI rules, Brazil's AI framework, and Japan's AI guidelines. Over 60 countries have enacted or proposed AI-specific regulations.
An AI impact assessment is a structured evaluation of how an AI system may affect individuals and society. It examines risks such as bias, privacy violations, and safety concerns. The EU AI Act requires fundamental rights impact assessments for certain deployers of high-risk AI systems.
ISO/IEC 42001 is the international standard for AI management systems. It provides a certification framework that helps organizations establish, implement, and improve their AI governance practices in a structured and auditable way.
The AI Bill of Rights is a White House blueprint outlining five principles to protect Americans from AI harms: safe and effective systems, freedom from algorithmic discrimination, data privacy, notice and explanation, and human alternatives and fallback options.
AI Governance Watch aggregates news from over 21 trusted sources including MIT Technology Review, TechCrunch, and The Verge. Articles are automatically categorized into topics like regulation, policy, ethics, compliance, and enforcement to help professionals track AI governance developments.
Algorithmic bias occurs when an AI system produces systematically unfair outcomes due to flawed data or design assumptions. It can lead to discrimination based on race, gender, or other protected characteristics. Detecting and mitigating bias is a core requirement of most AI governance frameworks.
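As a minimal, hypothetical sketch of one such check, the snippet below compares positive-outcome rates across groups (a demographic parity check). The data, group labels, and tolerance are illustrative and not drawn from any regulation.

```python
# Compare positive-outcome rates across groups and flag large gaps for review.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, decision) pairs, decision in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Toy decisions for two groups, "A" and "B"
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")

if gap > 0.2:  # illustrative tolerance, not a regulatory threshold
    print("Flag for review: outcome rates differ substantially across groups.")
```

Real-world bias testing typically combines several metrics (equalized odds, calibration, and others) and dedicated tooling, but the underlying idea is the same: measure outcome disparities and flag them for human review.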
The key AI governance frameworks are the EU AI Act, NIST AI RMF, OECD AI Principles, ISO/IEC 42001, the AI Bill of Rights, and Canada's AIDA. These frameworks set rules for AI risk management, compliance, and ethical use.
| Framework | Region | Status | Focus |
|---|---|---|---|
| EU AI Act | European Union | In Force | Risk-based AI regulation with tiered requirements |
| NIST AI RMF | United States | Active | Voluntary risk management framework (Govern, Map, Measure, Manage) |
| OECD AI Principles | International | Active | International guidelines for trustworthy AI |
| ISO/IEC 42001 | International | Published | AI management system certification standard |
| AI Bill of Rights | United States | Published | Blueprint for protecting civil rights in AI era |
| Canada AIDA | Canada | In Progress | Artificial Intelligence and Data Act |
According to Stanford HAI's AI Index Report, over 60 countries have enacted or proposed AI-specific regulations as of 2026. The trend is toward mandatory compliance requirements rather than voluntary guidelines.
AI Governance Watch was founded by Randy New, a FinTech executive with over 30 years of leadership in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy operates at the intersection of financial technology and emerging risk disciplines, with a particular focus on cybersecurity intelligence and AI governance.
Randy New also publishes Cyber Security Wire (cybersecurities.pro) and Human vs AI (humanvsai.tech). AI Governance Watch curates and aggregates AI governance news from authoritative sources including MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications.
For more information, visit our contact page or subscribe to our newsletter for daily or weekly updates.
"AI technologies can provide substantial benefits, but also pose risks. A responsible approach to AI requires both innovation and guardrails."
"AI actors should respect the rule of law, human rights, democratic values, and diversity, and should implement appropriate safeguards to ensure a fair and just society."
"Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public."
"Artificial intelligence should be a tool for people and be a force for good in society, with the ultimate aim of increasing human well-being."
"The number of AI-related regulations has increased sharply in recent years. In 2023 alone, there were 25 AI-related regulations enacted in the U.S., a significant increase from just one in 2016."
"AI systems must not be used for social scoring or mass surveillance purposes. Member States should ensure that AI systems do not undermine human dignity."