Welcome to the NexaQuanta Weekly Newsletter
In this NexaQuanta weekly newsletter, we bring you the latest insights and breakthroughs in AI, cybersecurity, and technology.
Our goal is to keep you informed about the most important developments shaping the future of tech and business, helping you stay ahead in a fast-evolving landscape.
This week, we cover five major updates. IBM’s latest report highlights the growing AI security gap and the rising costs of data breaches.
IBM and NASA have launched Surya, an open-source AI model to predict solar weather and protect critical technology.
Microsoft reveals context-aware AI features for Windows 11, enabling multimodal interactions and smarter computing.
Google Cloud introduces advanced AI security capabilities to safeguard enterprise AI projects.
Finally, OpenAI’s ChatGPT agent can now control PCs to perform complex tasks, offering unprecedented automation while raising new safety considerations.
Global Data Breach Costs Show AI Risks in 2025
New research from IBM and the Ponemon Institute highlights the growing gap between AI adoption and security governance. Companies that adopt AI rapidly without proper oversight face higher risks and more costly breaches.
The report shows that the global average cost of a data breach is $4.4 million, down 9% from last year due to faster breach identification and containment.
However, 97% of organizations that experienced an AI-related security incident lacked proper AI access controls. Additionally, 63% of organizations have no AI governance policies to prevent shadow AI risks.
Organizations that extensively use AI in security achieved cost savings of $1.9 million, compared to those that did not. Experts recommend fortifying identity security for both humans and machines.
Strong operational controls for non-human identities and modern, phishing-resistant methods like passkeys can significantly reduce credential abuse risks.
For deeper insights, IBM cybersecurity experts Jeff Crume and Suja Viswesan discuss key takeaways and strategies to limit AI and data risks in the 2025 Cost of a Data Breach Report.
IBM and NASA Launch AI Mode
New AI Model Surya
IBM and NASA have unveiled Surya, an advanced open-source AI foundation model designed to predict solar activity and its impact on Earth and space-based technology. Surya is trained on high-resolution solar observation data that is openly available on Hugging Face, allowing global researchers to access and build upon it.
Protecting Technology from Solar Storms
Solar flares and coronal mass ejections can disrupt satellites, GPS, power grids, and telecommunications. With increased reliance on space technology, accurate solar weather forecasts have become critical. Surya provides tools to help experts plan and safeguard technological infrastructure.
Improved Forecast Accuracy
Trained on nine years of high-resolution data from NASA’s Solar Dynamics Observatory, Surya improves solar flare classification accuracy by 16 percent. It can visually predict where a flare will occur up to two hours in advance, offering unprecedented spatial resolution.
Democratizing Science and Research
By releasing Surya on Hugging Face, IBM and NASA are making advanced AI tools available to the global research community. Researchers can use the model to study solar behavior, develop specialized applications, and enhance preparedness for solar events.
Part of Broader Collaboration
Surya is part of IBM and NASA’s ongoing AI efforts, including the Prithvi family of foundation models for geospatial and weather forecasting. The collaboration aims to advance data-driven science and empower AI as a tool for global scientific discovery.
Want to read more about this? Click here.
Microsoft Unveils Context-Aware AI Features for Windows 11
AI-Driven Windows 11
Microsoft confirms that the future of Windows 11 will be centered on AI with context-awareness. The OS will understand user intent by combining voice, vision, pen, touch, and screen interactions. Microsoft has no plans to discuss Windows 12 yet, focusing instead on evolving Windows 11.
Settings AI Agent and Local Models
The August 2025 Windows 11 Update introduces an AI-powered Settings app. Using the local AI model ‘Mu,’ the search bar can understand user intent even from unclear queries. Other local models like Phi are integrated into Edge and system features, providing AI capabilities directly on the device.
Multimodal Interaction
Windows 11 is moving beyond the mouse and keyboard. Vision-based features allow the OS to “see” the screen and anticipate user actions. For example, the AI can read, summarize, or edit PDF documents depending on user needs. Context-awareness is a key focus for future updates.
Copilot+ PCs and Hardware Requirements
These AI features will be limited to Copilot+ PCs equipped with NPUs. Regular PCs without this hardware will not support the AI capabilities. Microsoft aims to offer differentiated value through the integration of AI in Windows and Microsoft 365 Copilot.
AI as the Future of Windows
Microsoft emphasizes that AI will drive the next phase of Windows, blending device and cloud capabilities. The company continues to focus on evolving Windows 11 with AI features while gradually upgrading users from Windows 10.
[Learn more about Windows 11 AI features]
Google Cloud Launches Advanced AI Security Features
Securing AI Projects
Google Cloud has introduced new security capabilities to help organizations protect AI initiatives and strengthen overall cybersecurity. The announcements were made during the virtual Google Cloud Security Summit 2025.
AI Adoption and Challenges
According to Jon Ramsey, VP of Google Cloud Security, 91% of organizations have started AI projects. However, 74% struggle to move beyond experimentation. Security remains the top concern for developers and data scientists implementing AI at scale.
New Security Features
Google Cloud’s AI security updates include automated discovery of AI agents and model context protocol servers to detect vulnerabilities. Model Armor now protects Agent prompts and responses against runtime threats like prompt injection and sensitive data leakage. New threat detections use intelligence from Mandiant and Google to spot anomalous and risky behaviors.
Agentic Security Operations
The Alert Investigation agent, now in preview, allows organizations to autonomously analyze events and command-line activities. Mandiant AI Consulting offers risk-based AI governance, pre-deployment guidance, and AI threat modeling to ensure safe AI adoption.
Empowering Organizations
These updates aim to make security an enabler for AI adoption. Google Cloud helps organizations automate compliance, simplify access management, and protect AI workloads from development to deployment, enabling safer and more effective AI transformations.
[Learn more about Google Cloud AI security]
OpenAI Launches ChatGPT Agent to Control PCs
A Smarter AI Assistant
OpenAI has introduced ChatGPT agent, an upgraded version of its AI chatbot. It comes with a virtual computer and integrated toolkit, enabling it to carry out complex, multi-step tasks on a user’s PC. Users can now command the agent to analyze data, manage files, or even plan and purchase items.
Enhanced Capabilities
ChatGPT agent combines three components: Operator, which browses the web; Deep Research, which synthesizes large data sets; and the conversational skills of previous ChatGPT versions. It has shown major improvements in AI benchmarks, doubling accuracy in expert-level tests and outperforming prior models in math and reasoning challenges.
Limitations and Supervision
The agent still depends on human supervision. It struggles with spatial reasoning and lacks persistent memory, processing information only in the moment. Built-in safeguards, including permission prompts and interruptibility, are essential but cannot eliminate all risks.
Potential Risks
OpenAI acknowledges that the agent’s increased autonomy could be misused. With its virtual computer, it could interact with files, websites, and online tools, raising risks of data breaches, fraud, or misuse in sensitive areas. Experts warn that AI agents can amplify errors, introduce biases, and complicate liability.
Safety Measures
OpenAI has strengthened safeguards through threat modeling, dual-use refusal training, bug bounty programs, and expert red-teaming. However, external assessments like SaferAI and the AI Safety Index have rated OpenAI’s risk management as only moderate, highlighting the need for continued vigilance.
[Learn more about ChatGPT agent]
Stay Updated with NexaQuanta
Subscribe to NexaQuanta weekly newsletter to get curated updates on AI, technology, and cybersecurity directly to your inbox. Stay informed, make smarter decisions, and never miss a key development in the world of innovation.