Tag Archive for: Automated

CISA Launches New System for Automated Malware Analysis


The Cybersecurity and Infrastructure Security Agency has unveiled Malware Next-Gen, a new platform designed to provide automated analysis of newly identified malware to support threat detection and response efforts.

Malware Next-Gen works to enable government agencies to submit malware samples and suspicious artifacts for automated analysis to inform their cyber defense initiatives, CISA said Wednesday.
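The article does not describe a public submission API for Malware Next-Gen, so the sketch below is limited to the kind of local pre-submission triage an agency analyst might perform before uploading an artifact through the portal: hashing the file and recording basic metadata. The file name and metadata fields are illustrative assumptions, not part of any CISA interface.

```python
# Hypothetical pre-submission triage of a suspicious artifact.
# Nothing here calls a CISA API; it only hashes the file locally and
# collects basic metadata an analyst might record before upload.
import hashlib
import json
import os
from datetime import datetime, timezone


def triage_artifact(path: str) -> dict:
    """Return SHA-256, size, and collection-time metadata for a local file."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            sha256.update(chunk)
    return {
        "file_name": os.path.basename(path),
        "sha256": sha256.hexdigest(),
        "size_bytes": os.path.getsize(path),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    # "suspicious_sample.bin" is a placeholder file name for illustration.
    print(json.dumps(triage_artifact("suspicious_sample.bin"), indent=2))
```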

“Our new automated system enables CISA’s cybersecurity threat hunting analysts to better analyze, correlate, enrich data, and share cyber threat insights with partners. It facilitates and supports rapid and effective response to evolving cyber threats, ultimately safeguarding critical systems and infrastructure,” said Eric Goldstein, executive assistant director for cybersecurity at CISA.

Since November, Malware Next-Gen has analyzed over 1,600 files submitted by nearly 400 registered users across defense and civilian agencies and has identified and shared approximately 200 suspicious or malicious files and uniform resource locators.


Source…

Researchers Report First Instance of Automated SaaS Ransomware Extortion


The 0mega ransomware group has successfully pulled off an extortion attack against a company’s SharePoint Online environment without needing to use a compromised endpoint, which is how these attacks usually unfold. Instead, the threat group appears to have used a weakly secured administrator account to infiltrate the unnamed company’s environment, elevate permissions, and eventually exfiltrate sensitive data from the victim’s SharePoint libraries. The stolen data was then used to extort a ransom from the victim.

Likely First of its Kind Attack

The attack merits attention because most enterprise efforts to address the ransomware threat tend to focus on endpoint protection mechanisms, says Glenn Chisholm, cofounder and CPO at Obsidian, the security firm that discovered the attack.

“Companies have been trying to prevent or mitigate ransomware-group attacks entirely through endpoint security investments,” Chisholm says. “This attack shows that endpoint security isn’t enough, as many companies are now storing and accessing data in SaaS applications.”

The attack that Obsidian observed began with an 0mega group actor obtaining a poorly secured service account credential belonging to one of the victim organization’s Microsoft Global administrators. Not only was the breached account accessible from the public Internet, but it also did not have multi-factor authentication (MFA) enabled — something that most security experts agree is a basic security necessity, especially for privileged accounts.
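One defensive check that follows from this detail is auditing which privileged accounts have no MFA method registered. The sketch below is a minimal illustration using Microsoft Graph's authentication-methods registration report; it assumes an access token with a suitable reporting permission (for example AuditLog.Read.All) is available in a GRAPH_TOKEN environment variable, and the endpoint and field names should be verified against current Graph documentation before being relied on.

```python
# Sketch: flag admin accounts with no MFA method registered, using
# Microsoft Graph's authentication-methods registration report.
# Assumes GRAPH_TOKEN holds a token with the reporting permission
# (e.g. AuditLog.Read.All); verify endpoint/fields against Graph docs.
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}


def admins_without_mfa() -> list[str]:
    url = f"{GRAPH}/reports/authenticationMethods/userRegistrationDetails"
    flagged = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for user in data.get("value", []):
            if user.get("isAdmin") and not user.get("isMfaRegistered"):
                flagged.append(user["userPrincipalName"])
        url = data.get("@odata.nextLink")  # follow result paging, if any
    return flagged


if __name__ == "__main__":
    for upn in admins_without_mfa():
        print(f"Privileged account without MFA registered: {upn}")
```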

The threat actor used the compromised account to create an Active Directory user — somewhat brazenly — called “0mega” and then proceeded to grant the new account all the permissions needed to create havoc in the environment. These included the Global Administrator, SharePoint Administrator, Exchange Administrator, and Teams Administrator roles. For good measure, the threat actor used the compromised admin credential to grant the 0mega account so-called site collection administrator capabilities within the organization’s SharePoint Online environment and to remove all other existing administrators.
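A defensive counterpart to this step is to periodically enumerate who holds the tenant's high-privilege roles, so that a freshly created account such as “0mega” stands out immediately. The following sketch uses Microsoft Graph's directoryRoles endpoints, assuming a GRAPH_TOKEN environment variable carrying a directory read permission such as Directory.Read.All; the role names listed are the standard built-in display names.

```python
# Sketch: enumerate members of high-privilege Entra ID roles via Microsoft
# Graph so an unexpected account (e.g. a newly created "0mega" user) is
# easy to spot. Assumes GRAPH_TOKEN has a directory read permission such
# as Directory.Read.All.
import os

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
SENSITIVE_ROLES = {
    "Global Administrator",
    "SharePoint Administrator",
    "Exchange Administrator",
    "Teams Administrator",
}


def sensitive_role_members() -> dict[str, list[str]]:
    roles = requests.get(f"{GRAPH}/directoryRoles", headers=HEADERS, timeout=30)
    roles.raise_for_status()
    report = {}
    for role in roles.json().get("value", []):
        if role["displayName"] in SENSITIVE_ROLES:
            members = requests.get(
                f"{GRAPH}/directoryRoles/{role['id']}/members",
                headers=HEADERS,
                timeout=30,
            )
            members.raise_for_status()
            report[role["displayName"]] = [
                m.get("userPrincipalName", m.get("displayName", "unknown"))
                for m in members.json().get("value", [])
            ]
    return report


if __name__ == "__main__":
    for role_name, member_list in sensitive_role_members().items():
        print(role_name, "->", ", ".join(member_list) or "(none)")
```

Comparing such a snapshot against a known-good baseline, or alerting on any change to it, would surface a new privileged account like the one created in this attack almost immediately.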

In SharePoint-speak, a site collection is a group of websites within a Web application that share administrative…

Source…

The threat of automated hacking, deepfakes and weaponised AI


Vishal Salvi, chief information security officer and head of cyber security practice at Infosys, discusses the threat of automated hacking, deepfakes and weaponised AI.

AI has been deployed in a number of ways by threat actors in recent times.

It is a vexing paradox that while emerging cyber technologies provide valuable benefits, their malicious use, in the form of automated hacking, deepfakes, and weaponised artificial intelligence (AI), among others, proves a threat. Along with existing threats such as ransomware, botnets, phishing, and denial-of-service attacks, these malicious uses make information security hard to maintain.

Maintaining security will become even more challenging as more devices and systems are connected to the internet, as massive amounts of data that need securing are generated, and as newer technologies such as the Internet of Things and 5G gain ground. The democratisation of powerful computing technologies, such as distributed computing and the public cloud, only accentuates the issue.

Indeed, cyber threats can become a major, enduring risk to the world, says the World Economic Forum.

How real the threat is can be gleaned from the formation of the Joint Cybercrime Action Taskforce by Europol, the European Union’s (EU) law enforcement agency, which facilitates cross-border collaboration to combat cyber crime among 16 EU member countries as well as the U.S., Canada, and Australia, among others.

A Forrester study said 88% of respondents believe offensive AI is inevitable, with nearly half of them expecting AI-based attacks within the next year. With AI-powered attacks on the horizon, the study notes it “will be crucial to use AI as a force multiplier.”

Automated hacking

Increasing automation, a reality of the modern age, provides advantages such as speed, accuracy, and relief from monotonous tasks. Perversely, it has also sparked automated hacking, or hacking on an industrial scale, in the form of more numerous and more ‘efficient’ attack attempts that can cause massive financial losses and destroy an organisation’s reputation. These attacks are completely automated, from reconnaissance to attack orchestration, and are executed quickly, leaving little time for…

Source…

Gibson Dunn | Artificial Intelligence and Automated Systems Legal Update (1Q21)


April 23, 2021


Regulatory and policy developments during the first quarter of 2021 reflect a global tipping point toward serious regulation of artificial intelligence (“AI”) in the U.S. and European Union (“EU”), with far-reaching consequences for technology companies and government agencies.[1]   In late April 2021, the EU released its long-anticipated draft regulation for the use of AI, banning some “unacceptable” uses altogether and mandating strict guardrails such as documentary “proof” of safety and human oversight to ensure AI technology is “trustworthy.”

While these efforts to aggressively police the use of AI will surprise no one who has followed policy developments over the past several years, the EU is no longer alone in pushing for tougher oversight at this juncture.  As the United States’ national AI policy continues to take shape, it has thus far focused on ensuring international competitiveness and bolstering national security capabilities.  However, as the states move ahead with regulations seeking accountability for unfair or biased algorithms, it also appears that federal regulators—spearheaded by the Federal Trade Commission (“FTC”)—are positioning themselves as enforcers in the field of algorithmic fairness and bias.

Our 1Q21 Artificial Intelligence and Automated Systems Legal Update focuses on these critical regulatory efforts, and also examines other key developments within the U.S. and Europe that may be of interest to domestic and international companies alike.  As a result of several significant developments in April, and to avoid the need for multiple alerts, this 1Q21 update also includes a number of matters from April, the beginning of 2Q21.

________________________

A.         U.S. National AI Strategy
B.         National Security & Trade
C.         Algorithmic Accountability & Consumer Safety
D.         FDA’s Action Plan for AI Medical Devices
E.         Intellectual Property Updates
F.         U.S. Regulators Seek Input on Use of AI in Financial Services

A.         EC Publishes Draft Legislation for…

Source…