Salesforce is crowdsourcing its AI roadmap — with customers
Salesforce lets its customers lead its product roadmap with the thinking that if one enterprise customer has a problem, the others likely do too.
Stay informed on AI governance, compliance, and regulation news. Curated updates on AI ethics, policy, and enforcement from trusted sources.
Monitoring 7,329+ articles from 21+ trusted sources including MIT Technology Review, TechCrunch, The Verge, and AI News in 2026.
Randy New is the founder and editor of AI Governance Watch. He is a FinTech executive with over 30 years of experience in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy specializes in cybersecurity intelligence and AI governance.
Randy also publishes Cyber Security Wire and Human vs AI. Learn more about AI Governance Watch and its mission.
AI Governance Watch is a curated news platform that aggregates AI governance, compliance, and regulation news from over 21 trusted sources. It helps professionals track AI policy developments worldwide.
Sources include MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications. As of 2026, the platform has aggregated 7,329+ articles across six categories.
Articles are automatically categorized into six areas: regulation, policy, ethics, compliance, enforcement, and general AI news. Each category focuses on a specific aspect of AI governance.
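AI Governance Watch has not published how its categorizer works, so the sketch below is only a plausible illustration of keyword-based routing into these six categories; the keyword lists and the `categorize` function are assumptions invented for this example, not the site's actual pipeline.

```python
# Illustrative sketch only: the real pipeline is not public. A minimal
# keyword-based router assigns each article to the first category whose
# keyword list matches; everything else falls into general AI news.
CATEGORY_KEYWORDS = {
    "regulation":  ["ai act", "regulation", "regulatory", "legislation"],
    "policy":      ["policy", "executive order", "strategy"],
    "ethics":      ["ethics", "bias", "fairness", "transparency"],
    "compliance":  ["compliance", "audit", "certification", "iso"],
    "enforcement": ["fine", "penalty", "lawsuit", "investigation"],
}

def categorize(title: str, summary: str) -> str:
    """Return the first matching category, else the general-news bucket."""
    text = f"{title} {summary}".lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "general"

print(categorize("EU regulators fine chatbot vendor", ""))  # -> enforcement
```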
Recently curated articles on AI regulation, policy, and compliance:
Sony and Bose make best-in-class headphones, but my extended time with the latest flagship models reveals their true strengths and weaknesses.
X is rolling out a rebuilt ads platform powered by AI as it works to grow revenue again.
Despite only having one face, I made testing work. I'm currently wearing a pair of smart glasses called the Even Realities G2. Another two pairs, from Rokid, sit on my desk. A few feet away, I've got the Meta Ray-Ban Display charging alongside their Neural Wristband. In my closet are six pairs of $50 smart sunnies that an overzealous Walmart rep sent me. Those sit next to some Xreal, RayNeo, and Lucyd glasses, plus an old pair of Razer Anzu. Later, I'm calling my optician because I'm hoping to
Native AI vendors are popping up in the marketing world.
Lock-in worries threaten to dampen the E7 launch party: The Coalition for Fair Software Licensing has published research showing that US workers reckon Microsoft is using its productivity tools to lock their employers into the company's AI services.…
ChatGPT analyzed two of my apps, flagged issues, and generated new mockups. It's a game-changer.
Register now for Qlik’s annual Public Sector Summit, taking place May 19 in Washington, D.C.
I struggled with Wi-Fi dead spots for years. Here's what finally worked.
OpenAI is opening up about its goblin problem. After a report from Wired revealed instructions to OpenAI's coding model to "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures," the AI startup published an explanation on its website, calling references to the creatures a "strange habit" its models developed as a result of their training. As outlined in the blog post, OpenAI began noticing metaphors referencing goblins and other creatures starting w
Concerns over new rules might stop customers from adopting innovations – including AI – that connect to SAP systems: An influential SAP user group has criticized the vendor's API policy update, saying it lacks clarity and potentially prevents users from starting new projects and innovating on their SAP platforms.…
The AI firm said that unlike previous model bugs, this issue "crept in subtly".
Spotify is launching a new verification program to combat spam, fakes, and AI. Some artists will now have a "Verified by Spotify" badge and a green checkmark on their profile, indicating that the company has confirmed a real person is behind the music and the profile. At least at launch, Spotify says that AI personas or profiles that primarily upload AI-generated music are not eligible for the verification program. It did leave the door open to the possibility in the future, though, saying, "the
Samsung and Google make some of the best Android phones, but they're very different. Here's when to choose a Galaxy or a Pixel.
But why did those fans go away in the first place, Satya? Microsoft boss Satya Nadella told investors during an earnings call last night that the company needs to "win back" its fans.…
AI boom splits between companies hoarding eyeballs and those actually charging for them: Anthropic is pulling in more LLM revenue than OpenAI, despite having a fraction of the users.…
By charging my iPhone in this one spot, I was damaging my battery and shortening its lifespan. Here's what I do now to avoid it.
Meta said over 8 billion advertisers have used at least one of its gen AI tools.
Meta is planning to pump billions more into AI investments this year, despite noting that millions of users have seemingly started to abandon its platforms. In an earnings call on Wednesday, Meta reported that figures for "Family daily active people" - the term Meta has coined for all collective users of Facebook, Instagram, WhatsApp, or Messenger - declined by 20 million this quarter compared to the previous three months. Meta attributes this fall to "internet disruptions in Iran, as well as a r
OpenAI is preparing to launch a new frontier cybersecurity model, GPT-5.5-Cyber. CEO Sam Altman said the model will not be available to the general public, but will first be rolled out to a select group of trusted "cyber defenders" so that institutions can shore up their cyberdefenses. The limited rollout will take place "in the next few days," Altman said on X. "We will work with the entire ecosystem and the government to figure out trusted access for Cyber." It's not clear who will get ac
AI governance is the set of rules, policies, and frameworks that ensure artificial intelligence is developed and used responsibly. It covers ethical guidelines, compliance standards, and oversight mechanisms to keep AI safe, fair, and accountable.
The EU AI Act requires businesses to classify their AI systems by risk level and meet specific obligations. High-risk systems need conformity assessments, technical documentation, and human oversight. For the most serious violations, non-compliance can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
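As a concrete illustration of that penalty ceiling, here is a minimal sketch of the "whichever is higher" rule for the top fine tier; the turnover figure is invented for the example and none of this is legal advice.

```python
# Hedged sketch of the EU AI Act's top penalty tier: the higher of
# EUR 35 million or 7% of worldwide annual turnover. The example
# turnover below is made up for illustration.
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Cap for the most serious violations: max(EUR 35M, 7% of turnover)."""
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

# A firm with EUR 2B turnover faces a cap of EUR 140M, not EUR 35M.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```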
The NIST AI RMF is a voluntary U.S. framework that helps organizations identify, assess, and mitigate AI-related risks. It is built around four core functions: Govern, Map, Measure, and Manage.
AI compliance is critical because governments worldwide are actively enforcing AI regulations. The EU AI Act carries heavy fines, the U.S. has expanded federal AI oversight, and countries like Canada, Brazil, and China have enacted AI-specific laws. Non-compliance risks penalties, reputational harm, and operational disruption.
The key AI ethics principles are fairness, transparency, accountability, privacy, safety, human oversight, and inclusiveness. These principles are reflected in major frameworks including the OECD AI Principles and the EU Ethics Guidelines for Trustworthy AI.
Organizations implement AI risk management by creating governance structures, running impact assessments, testing for bias, monitoring model performance, and documenting decisions. The NIST AI RMF and ISO/IEC 42001 provide standardized approaches for this process.
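To make "documenting decisions" concrete, the following is a hedged sketch of a risk register keyed to the NIST AI RMF's four functions; neither NIST nor ISO/IEC 42001 prescribes a data model, so the record fields, system name, and findings are invented for illustration.

```python
# Illustrative only: one way to keep an auditable AI risk register
# organized around the NIST AI RMF functions (Govern, Map, Measure,
# Manage). All fields and entries below are fabricated examples.
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    system: str
    function: str                 # "Govern" | "Map" | "Measure" | "Manage"
    activity: str
    finding: str
    mitigations: list[str] = field(default_factory=list)

register = [
    RiskRecord("loan-scoring-v2", "Map", "impact assessment",
               "model affects credit access for protected groups"),
    RiskRecord("loan-scoring-v2", "Measure", "bias testing",
               "approval-rate gap across demographic groups",
               mitigations=["reweight training data", "review threshold"]),
]

# Tying every finding to a named RMF function keeps decisions traceable,
# which is also what ISO/IEC 42001-style audits look for.
for record in register:
    print(f"[{record.function}] {record.system}: {record.finding}")
```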
Major AI regulations include the EU AI Act, U.S. Executive Orders on AI Safety, Canada's AIDA, South Korea's AI Basic Act, China's Generative AI rules, Brazil's AI framework, and Japan's AI guidelines. Over 60 countries have enacted or proposed AI-specific regulations.
An AI impact assessment is a structured evaluation of how an AI system may affect individuals and society. It examines risks such as bias, privacy violations, and safety concerns. The EU AI Act makes fundamental rights impact assessments mandatory for certain deployers of high-risk AI systems.
ISO/IEC 42001 is the international standard for AI management systems. It provides a certification framework that helps organizations establish, implement, and improve their AI governance practices in a structured and auditable way.
The AI Bill of Rights is a White House blueprint outlining five principles to protect Americans from AI harms: safe and effective systems, freedom from algorithmic discrimination, data privacy, notice and explanation, and human alternatives and fallback options.
AI Governance Watch aggregates news from over 21 trusted sources including MIT Technology Review, TechCrunch, and The Verge. Articles are automatically categorized into topics like regulation, policy, ethics, compliance, and enforcement to help professionals track AI governance developments.
Algorithmic bias occurs when an AI system produces systematically unfair outcomes due to flawed data or design assumptions. It can lead to discrimination based on race, gender, or other protected characteristics. Detecting and mitigating bias is a core requirement of most AI governance frameworks.
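One simple, widely used bias check is the disparate impact ratio, sketched below; the 0.8 review threshold follows the US "four-fifths rule" convention, and the group names and counts are fabricated for the example.

```python
# Illustrative bias check, not a full fairness audit: the disparate
# impact ratio compares positive-outcome rates between groups. Data
# below is made up; 0.8 is the conventional four-fifths threshold.
def disparate_impact(outcomes: dict[str, tuple[int, int]]) -> float:
    """outcomes maps group -> (positives, total); returns min/max rate ratio."""
    rates = [pos / total for pos, total in outcomes.values()]
    return min(rates) / max(rates)

ratio = disparate_impact({"group_a": (80, 100), "group_b": (50, 100)})
print(f"{ratio:.2f}", "flag for review" if ratio < 0.8 else "ok")  # 0.62 flag for review
```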
The key AI governance frameworks are the EU AI Act, NIST AI RMF, OECD AI Principles, ISO/IEC 42001, the AI Bill of Rights, and Canada's AIDA. These frameworks set rules for AI risk management, compliance, and ethical use.
| Framework | Region | Status | Focus |
|---|---|---|---|
| EU AI Act | European Union | In Force | Risk-based AI regulation with tiered requirements |
| NIST AI RMF | United States | Active | Voluntary risk management framework (Govern, Map, Measure, Manage) |
| OECD AI Principles | International | Active | International guidelines for trustworthy AI |
| ISO/IEC 42001 | International | Published | AI management system certification standard |
| AI Bill of Rights | United States | Published | Blueprint for protecting civil rights in AI era |
| Canada AIDA | Canada | In Progress | Artificial Intelligence and Data Act |
According to Stanford HAI's AI Index Report, over 60 countries have enacted or proposed AI-specific regulations as of 2026. The trend is toward mandatory compliance requirements rather than voluntary guidelines.
AI Governance Watch was founded by Randy New, a FinTech executive with over 30 years of leadership in infrastructure, cybersecurity, M&A integration, and regulatory compliance. Randy operates at the intersection of financial technology and emerging risk disciplines, with a particular focus on cybersecurity intelligence and AI governance.
Randy New also publishes Cyber Security Wire (cybersecurities.pro) and Human vs AI (humanvsai.tech). AI Governance Watch curates and aggregates AI governance news from authoritative sources including MIT Technology Review, TechCrunch, The Verge, and specialized AI policy publications.
For more information, visit our contact page or subscribe to our newsletter for daily or weekly updates.
"AI technologies can provide substantial benefits, but also pose risks. A responsible approach to AI requires both innovation and guardrails."
"AI actors should respect the rule of law, human rights, democratic values, and diversity, and should implement appropriate safeguards to ensure a fair and just society."
"Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public."
"Artificial intelligence should be a tool for people and be a force for good in society, with the ultimate aim of increasing human well-being."
"The number of AI-related regulations has increased sharply in recent years. In 2023 alone, there were 25 AI-related regulations enacted in the U.S., a significant increase from just one in 2016."
"AI systems must not be used for social scoring or mass surveillance purposes. Member States should ensure that AI systems do not undermine human dignity."