OpenAI faces criminal probe over role of ChatGPT in shooting
The firm, co-founded by Sam Altman, said it is "not responsible" for the attack at Florida State University
Stay informed on AI governance, compliance, and regulation news. Curated updates on AI ethics, policy, and enforcement from trusted sources.
Monitoring 6858+ articles from 21+ trusted sources including MIT Technology Review, TechCrunch, The Verge, and AI News in 2026.
Randy New is the founder and editor of AI Governance Watch. He is a FinTech executive with over 30 years of experience in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy specializes in cybersecurity intelligence and AI governance.
Randy also publishes Cyber Security Wire and Human vs AI. Learn more about AI Governance Watch and its mission.
AI Governance Watch is a curated news platform that aggregates AI governance, compliance, and regulation news from over 21 trusted sources. It helps professionals track AI policy developments worldwide.
Sources include MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications. As of 2026, the platform has aggregated 6858+ articles across six categories.
Articles are automatically categorized into six areas: regulation, policy, ethics, compliance, enforcement, and general AI news. Each category focuses on a specific aspect of AI governance.
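To illustrate what that categorization step can look like, here is a minimal keyword-matching sketch in Python. The category keywords, function name, and matching rules are illustrative assumptions; they are not the platform's actual pipeline, which is not published here.

```python
# Minimal keyword-based categorizer sketch. The keyword lists and matching
# rules are illustrative assumptions, not the platform's real implementation.
CATEGORY_KEYWORDS = {
    "regulation": ["ai act", "regulation", "bill", "law"],
    "policy": ["policy", "executive order", "framework", "strategy"],
    "ethics": ["ethics", "bias", "fairness", "transparency"],
    "compliance": ["compliance", "audit", "certification", "iso"],
    "enforcement": ["fine", "penalty", "lawsuit", "probe", "investigation"],
}

def categorize(headline: str) -> str:
    """Assign an article to the first matching category, else general AI news."""
    text = headline.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "general"

if __name__ == "__main__":
    print(categorize("OpenAI faces criminal probe over role of ChatGPT in shooting"))
    # -> "enforcement" (matches "probe")
```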
Recently curated articles on AI regulation, policy, and compliance:
The firm, co-founded by Sam Altman, said it is "not responsible" for the attack at Florida State University
Founded by an OSU researcher, the startup is developing AI agents that can become experts in any domain.
The move is part of VW’s broader automotive AI strategy.
OpenAI's new ChatGPT Images 2.0 promises precision and design control. Here's how to try it for yourself.
Ask Americans how they feel about AI and most say they have concerns. Communities have mounted resistance to data center projects, stalling them across the US. On social media, anger at AI companies and executives is unrestrained, sometimes to the point of condoning violence. But look at the issues that most campaigns are focused on, and AI is far less prevalent, experts say. More than 60 percent of both Republicans and Democrats polled by Ipsos earlier this year agree that the government should…
The ChatGPT Images 2.0 model is here. Our testing shows it's better at creating more detailed images and rendering text, but it still struggles with languages other than English.
[Image: an image generated by ChatGPT Images 2.0. Credit: OpenAI] OpenAI is rolling out the latest version of its AI-powered image generator with new "thinking capabilities," allowing it to search the web to help it create multiple images from a single prompt. In a blog post, OpenAI says ChatGPT Images 2.0 can now create more "sophisticated" images, with improvements to its ability to follow instructions, preserve details of your choosing, and generate text. It's powered by OpenAI's new GPT Image 2 model…
ChatGPT Images 2.0, the newest image generation model from OpenAI, shows just how much AI capabilities have evolved over the last few years.
All three are related to smart home connectivity, and it's important that you understand their differences.
The AI industry's mudslinging continues.
The Firefox team doesn’t think emerging AI capabilities will upend cybersecurity long term, but they warn that software developers are likely in for a rocky transition.
Six years of user feedback have led to the new Framework Laptop 13 Pro, a thin and light device that's both modular and premium.
Remember when Framework made the first laptop where you can easily upgrade its entire internal video card in three minutes flat? The company's getting into the external graphics game, too. As promised last August, you'll be able to turn the Framework Laptop 16's GPU modules into external ones instead. Or, you can plug in a desktop graphics card (or network card, or other PCIe cards) for more power than most laptops ever dream of having, with eight lanes of PCI-Express bandwidth.
McKinsey identifies four coordinated steps that connect strategy, technology, and people to build strong foundational data capabilities.
The partners aim to create the core infrastructure for next-generation robotic systems.
The soon-to-exit Apple CEO went all in on services. Now, the incoming CEO, John Ternus, will need to embrace the AI era.
There's a lot to be excited about with Apple's next chief executive officer, even if you care more about software.
YouTube is expanding its AI deepfake monitoring feature to Hollywood - meaning some celebrity AI videos could soon disappear. The platform's likeness detection feature searches YouTube for AI deepfake content and flags it for public figures enrolled in the program. Public figures can use it to keep track of AI content on YouTube of themselves or request removal (takedowns are evaluated against YouTube's privacy policy, and not every request will be approved). YouTube began testing the feature…
Motorola is offering a deal on its latest Moto G phone that includes a Moto Tag 4-pack and a pair of Moto Buds Plus earbuds, free. Here's what to know.
As AI agents increasingly work alongside humans across organizations, companies could be inadvertently opening a new attack surface. Insecure agents can be manipulated to access sensitive systems and proprietary data, increasing enterprise risk. In some modern enterprises, non-human identities (NHI) are outpacing human identities, and that trend will explode with agentic AI. Solid governance and…
AI governance is the set of rules, policies, and frameworks that ensure artificial intelligence is developed and used responsibly. It covers ethical guidelines, compliance standards, and oversight mechanisms to keep AI safe, fair, and accountable.
The EU AI Act requires businesses to classify their AI systems by risk level and meet specific obligations. High-risk systems need conformity assessments, technical documentation, and human oversight. Non-compliance can result in fines up to €35 million or 7% of global turnover.
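As a rough illustration of that risk-based classification, the sketch below maps a coarse use-case label to one of the Act's tiers. The tier names mirror the Act's structure, but the use-case lists and matching logic are simplified assumptions, not legal criteria.

```python
# Simplified sketch of the EU AI Act's risk tiers. The matching logic is an
# illustrative assumption and is not a substitute for legal analysis.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # e.g. social scoring
    HIGH = "high-risk"            # conformity assessment, documentation, oversight
    LIMITED = "limited-risk"      # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal-risk"      # no additional obligations

# Hypothetical use-case labels, used only for this example.
PROHIBITED_USES = {"social scoring", "untargeted facial scraping"}
HIGH_RISK_USES = {"hiring", "credit scoring", "medical device", "law enforcement"}
LIMITED_RISK_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    """Map a coarse use-case label to a risk tier under simplified rules."""
    use_case = use_case.lower()
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

if __name__ == "__main__":
    print(classify("hiring"))   # RiskTier.HIGH -> conformity assessment required
```

In practice, classification under the Act turns on detailed Annex III categories and exemptions, so a lookup like this is only a starting point for triage.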
The NIST AI RMF is a voluntary U.S. framework that helps organizations identify, assess, and mitigate AI-related risks. It is built around four core functions: Govern, Map, Measure, and Manage.
AI compliance is critical because governments worldwide are actively enforcing AI regulations. The EU AI Act carries heavy fines, the U.S. has expanded federal AI oversight, and countries like Canada, Brazil, and China have enacted AI-specific laws. Non-compliance risks penalties, reputational harm, and operational disruption.
The key AI ethics principles are fairness, transparency, accountability, privacy, safety, human oversight, and inclusiveness. These principles are reflected in major frameworks including the OECD AI Principles and the EU Ethics Guidelines for Trustworthy AI.
Organizations implement AI risk management by creating governance structures, running impact assessments, testing for bias, monitoring model performance, and documenting decisions. The NIST AI RMF and ISO/IEC 42001 provide standardized approaches for this process.
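The monitoring and documentation steps can be sketched as follows. The accuracy threshold, model name, and field names are hypothetical choices for illustration, not part of any standard.

```python
# Sketch of ongoing performance monitoring with a documented decision record.
# The 0.85 accuracy threshold and field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringRecord:
    model_name: str
    accuracy: float
    threshold: float
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def check_model(model_name: str, accuracy: float, threshold: float = 0.85) -> MonitoringRecord:
    """Compare observed accuracy to a threshold and record the resulting decision."""
    action = "no action" if accuracy >= threshold else "escalate to governance board"
    return MonitoringRecord(model_name, accuracy, threshold, action)

if __name__ == "__main__":
    record = check_model("credit-scoring-v3", accuracy=0.81)
    print(record)   # accuracy below threshold -> escalate to governance board
```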
Major AI regulations include the EU AI Act, U.S. Executive Orders on AI Safety, Canada's AIDA, South Korea's AI Basic Act, China's Generative AI rules, Brazil's AI framework, and Japan's AI guidelines. Over 60 countries have enacted or proposed AI-specific regulations.
An AI impact assessment is a structured evaluation of how an AI system may affect individuals and society. It examines risks such as bias, privacy violations, and safety concerns. The EU AI Act requires fundamental rights impact assessments for certain deployers of high-risk AI systems.
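To make the "structured evaluation" idea concrete, the sketch below scores each identified risk by likelihood and severity and rolls them up into a coarse rating. The scales, thresholds, and risk entries are assumptions for illustration, not a mandated template.

```python
# Sketch of a structured impact assessment: each risk gets a likelihood and
# severity score (1-5) and a combined rating. Scales and entries are
# illustrative assumptions, not a prescribed regulatory format.
def risk_rating(likelihood: int, severity: int) -> str:
    """Combine likelihood and severity (each 1-5) into a coarse rating."""
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

ASSESSMENT = [
    # (risk, likelihood, severity)
    ("biased outcomes for protected groups", 4, 5),
    ("exposure of personal data", 2, 5),
    ("unsafe recommendations to end users", 2, 3),
]

if __name__ == "__main__":
    for risk, likelihood, severity in ASSESSMENT:
        print(f"{risk}: {risk_rating(likelihood, severity)}")
    # biased outcomes: high (20), data exposure: medium (10), unsafe recs: low (6)
```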
ISO/IEC 42001 is the international standard for AI management systems. It provides a certification framework that helps organizations establish, implement, and improve their AI governance practices in a structured and auditable way.
The AI Bill of Rights is a White House blueprint outlining five principles to protect Americans from AI harms: safe and effective systems, freedom from algorithmic discrimination, data privacy, notice and explanation, and human alternatives and fallback options.
AI Governance Watch aggregates news from over 21 trusted sources including MIT Technology Review, TechCrunch, and The Verge. Articles are automatically categorized into topics like regulation, policy, ethics, compliance, and enforcement to help professionals track AI governance developments.
Algorithmic bias occurs when an AI system produces systematically unfair outcomes due to flawed data or design assumptions. It can lead to discrimination based on race, gender, or other protected characteristics. Detecting and mitigating bias is a core requirement of most AI governance frameworks.
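One common detection technique is the disparate impact ratio: the selection rate of a protected group divided by that of a reference group. The sketch below uses fabricated outcomes, and the 0.8 threshold reflects the widely cited "four-fifths rule" rather than any statutory requirement.

```python
# Disparate impact ratio sketch. The sample outcomes are fabricated for
# illustration; 0.8 reflects the commonly cited "four-fifths rule".
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected: list[int], reference: list[int]) -> float:
    """Selection rate of the protected group relative to the reference group."""
    return selection_rate(protected) / selection_rate(reference)

if __name__ == "__main__":
    protected_group = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 30% approved
    reference_group = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]   # 70% approved
    ratio = disparate_impact(protected_group, reference_group)
    print(f"disparate impact ratio: {ratio:.2f}")       # 0.43, below the 0.8 threshold
    if ratio < 0.8:
        print("potential adverse impact - investigate and mitigate")
```

A ratio below 0.8 does not prove discrimination on its own, but most governance frameworks treat it as a trigger for deeper review and mitigation.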
The key AI governance frameworks are the EU AI Act, NIST AI RMF, OECD AI Principles, ISO/IEC 42001, the AI Bill of Rights, and Canada's AIDA. These frameworks set rules for AI risk management, compliance, and ethical use.
| Framework | Region | Status | Focus |
|---|---|---|---|
| EU AI Act | European Union | In Force | Risk-based AI regulation with tiered requirements |
| NIST AI RMF | United States | Active | Voluntary risk management framework (Govern, Map, Measure, Manage) |
| OECD AI Principles | International | Active | International guidelines for trustworthy AI |
| ISO/IEC 42001 | International | Published | AI management system certification standard |
| AI Bill of Rights | United States | Published | Blueprint for protecting civil rights in AI era |
| Canada AIDA | Canada | In Progress | Artificial Intelligence and Data Act |
According to Stanford HAI's AI Index Report, over 60 countries have enacted or proposed AI-specific regulations as of 2026. The trend is toward mandatory compliance requirements rather than voluntary guidelines.
AI Governance Watch was founded by Randy New, a FinTech executive with over 30 years of leadership in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy operates at the intersection of financial technology and emerging risk disciplines, with a particular focus on cybersecurity intelligence and AI governance.
Randy New also publishes Cyber Security Wire (cybersecurities.pro) and Human vs AI (humanvsai.tech). AI Governance Watch curates and aggregates AI governance news from authoritative sources including MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications.
For more information, visit our contact page or subscribe to our newsletter for daily or weekly updates.
"AI technologies can provide substantial benefits, but also pose risks. A responsible approach to AI requires both innovation and guardrails."
"AI actors should respect the rule of law, human rights, democratic values, and diversity, and should implement appropriate safeguards to ensure a fair and just society."
"Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public."
"Artificial intelligence should be a tool for people and be a force for good in society, with the ultimate aim of increasing human well-being."
"The number of AI-related regulations has increased sharply in recent years. In 2023 alone, there were 25 AI-related regulations enacted in the U.S., a significant increase from just one in 2016."
"AI systems must not be used for social scoring or mass surveillance purposes. Member States should ensure that AI systems do not undermine human dignity."