Microsoft report claims US adversaries are preparing for an AI war

In a new briefing released this week, software giant Microsoft argues that US adversaries such as Iran, Russia, and North Korea are poised to ramp up their cyberwar operations with modern generative AI. The problem is exacerbated, it argues, by a persistent shortage of experienced cybersecurity professionals: a 2023 ISC2 Cybersecurity Workforce Study cited in the briefing estimates that almost 4 million additional workers are needed to close the gap. Microsoft's own data for 2023 shows a sharp rise in password attacks over two years, from 579 per second to more than 4,000 per second.

The company's answer has been the launch of Copilot for Security, an AI assistant intended to detect, identify, and stop these threats faster and more efficiently than humans working alone. For example, a recent test found that generative AI enabled security analysts of all experience levels to work 44% more accurately and 26% faster across all types of threats. Eighty-six percent said the AI made them more productive and reduced the effort required to perform their tasks.
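
To make the idea concrete, here is a minimal, hypothetical sketch of how an LLM-assisted triage step could be wired into an analyst workflow. The Alert class, the build_triage_prompt helper, and the llm_complete callable are illustrative placeholders of my own, not Copilot for Security's actual interface.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    source: str    # e.g. "firewall", "identity provider"
    severity: str  # e.g. "low", "medium", "high"
    raw_log: str   # the underlying event text


def build_triage_prompt(alert: Alert) -> str:
    """Condense an alert into a prompt asking the model for a summary and next steps."""
    return (
        "You are assisting a security analyst.\n"
        f"Alert source: {alert.source}\n"
        f"Severity: {alert.severity}\n"
        f"Log excerpt:\n{alert.raw_log}\n\n"
        "Summarise the likely attack technique and suggest the next investigative step."
    )


def triage(alert: Alert, llm_complete) -> str:
    # llm_complete is any callable that takes a prompt string and returns model text;
    # it stands in for whatever model API a real deployment would use.
    return llm_complete(build_triage_prompt(alert))
```

The design point this illustrates is the one the briefing makes about analyst productivity: the model drafts the summary and next step, while the analyst stays in the loop to act on it.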

Unfortunately, as the company admits, the application of AI is not limited to defenders. The technology's rapid advance is fueling an arms race, as threat actors look to use the same new tools to cause as much damage as possible. As a result, this threat briefing has been issued to warn of an impending escalation. The briefing notes that OpenAI and Microsoft are working together to detect and counter these rogue actors and their techniques as they emerge in force.

Generative AI has already had a pervasive impact on cyberattacks. Darktrace researchers observed a 135% spike in email-based 'novel' social engineering attacks between January and February 2023, coinciding with the broad deployment of ChatGPT. They also saw a rise in linguistically sophisticated phishing emails, with higher word counts, longer sentences, and more punctuation. This was accompanied by a 52% surge in email account takeover attempts, with attackers posing as the IT team at victims' firms.
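
The linguistic markers Darktrace points to (word count, sentence length, punctuation) are simple enough to illustrate. The short Python sketch below, with a hypothetical linguistic_features helper, shows how such surface features could be extracted from an email body as one naive signal among many; it is not Darktrace's method.

```python
import re


def linguistic_features(email_body: str) -> dict:
    """Compute naive linguistic-complexity features: word count,
    average sentence length, and punctuation density."""
    words = re.findall(r"[A-Za-z']+", email_body)
    sentences = [s for s in re.split(r"[.!?]+", email_body) if s.strip()]
    punctuation = re.findall(r"[,.;:!?\-()\"']", email_body)
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_density": len(punctuation) / max(len(words), 1),
    }


# Example: a terse legacy phish versus a longer, machine-polished one.
print(linguistic_features("Click here to verify you're account now!"))
print(linguistic_features(
    "Dear colleague, as part of our scheduled infrastructure review, "
    "the IT team requires you to re-authenticate your mailbox; please "
    "complete the verification form before Friday, or access may be suspended."
))
```

A real detector would combine features like these with sender reputation, behavioral history, and many other signals rather than relying on text statistics alone.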

The paper identifies three priority areas expected to demand growing amounts of AI in the near future: improved reconnaissance of targets and weaknesses, enhanced malware scripting using AI coding assistants, and help with learning and planning. Because of the massive computing resources required, nation-states will very probably be among the first to adopt the technology.

Several such cyberthreat outfits are expressly identified. Strontium (also known as APT28) is a highly active cyber-espionage group that has operated out of Russia for the past two decades. It goes by a variety of names and is expected to significantly expand its use of powerful AI capabilities as they become available.

North Korea also has a significant cyber-espionage presence. According to some reports, over 7,000 workers have been running continuous threat operations against the West for decades, with activity increasing by 300% since 2017. The Emerald Sleet operation, also known as Velvet Chollima, primarily targets academic and non-governmental organizations, and it increasingly uses AI to refine phishing tactics and evaluate vulnerabilities.

The briefing focuses on two more important players in the global cyberwar arena: Iran and China. These two countries have also been increasing their use of large language models (LLMs), largely to research targets and identify potential avenues of attack. In addition to these geopolitical threats, the Microsoft briefing discusses the rising use of AI in more traditional criminal activities such as ransomware, fraud (particularly voice cloning), email phishing, and general identity manipulation.

As the conflict heats up, we can expect Microsoft and its partners, such as OpenAI, to build an increasingly sophisticated set of tools for threat detection, behavioral analytics, and other techniques for rapidly and decisively identifying attacks.
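
As a rough illustration of what behavioral analytics can mean in practice, the sketch below uses a hypothetical FailedLoginMonitor class to flag an account whose failed-login count in the current window sits well above its own historical baseline. This is a toy example under assumed thresholds, not a description of Microsoft's tooling.

```python
from collections import deque
from statistics import mean, pstdev


class FailedLoginMonitor:
    """Toy behavioral-analytics sketch: flag a window whose failed-login count
    sits far above the account's own historical baseline (hypothetical example)."""

    def __init__(self, history_windows: int = 24, threshold_sigma: float = 3.0):
        self.history = deque(maxlen=history_windows)  # counts from past windows
        self.threshold_sigma = threshold_sigma

    def observe(self, failed_count: int) -> bool:
        """Return True if this window's count looks anomalous, then record it."""
        anomalous = False
        if len(self.history) >= 5:  # wait for some history before judging
            baseline = mean(self.history)
            spread = pstdev(self.history) or 1.0  # avoid division by zero
            anomalous = (failed_count - baseline) / spread > self.threshold_sigma
        self.history.append(failed_count)
        return anomalous


monitor = FailedLoginMonitor()
for count in [2, 3, 1, 2, 4, 3, 2, 50]:  # sudden spike in the last window
    if monitor.observe(count):
        print(f"Anomalous window: {count} failed logins")
```

Production systems build far richer behavioral baselines (per user, per device, per time of day), but the principle is the same: model normal activity and surface the deviations.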

According to the report, “Microsoft anticipates that AI will evolve social engineering tactics, creating more sophisticated attacks including deepfakes and voice cloning…prevention is key to combating all cyberthreats, whether traditional or AI-enabled.”
