Tag Archive for: deepfakes

Deepfakes, Ransomware Identified As Imminent Threats For 2024 In India: Report


New Delhi, March 22 (IANS): Artificial Intelligence (AI)-generated deepfakes, multi-factor authentication (MFA) fatigue attacks, and complex ransomware incidents have been identified as imminent threats for 2024 in India that require urgent attention, a new report said on Friday.
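MFA fatigue attacks, sometimes called “push bombing,” work by flooding a user with authentication prompts until one is approved out of habit or frustration. As a rough illustration only (the report prescribes no specific tooling), the sketch below flags an account receiving an unusual burst of push prompts; the threshold and window are assumed values, not vendor guidance.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Assumed thresholds: treat more than MAX_PROMPTS push prompts
# within WINDOW as a possible MFA-fatigue (push-bombing) attempt.
MAX_PROMPTS = 5
WINDOW = timedelta(minutes=10)

prompt_history: dict[str, deque] = defaultdict(deque)

def record_push_prompt(user: str, timestamp: datetime) -> bool:
    """Record one MFA push prompt; return True if the user looks under attack."""
    history = prompt_history[user]
    history.append(timestamp)
    # Discard prompts that have aged out of the sliding window.
    while history and timestamp - history[0] > WINDOW:
        history.popleft()
    return len(history) > MAX_PROMPTS

if __name__ == "__main__":
    now = datetime.now()
    for minute in range(7):  # seven prompts in seven minutes
        flagged = record_push_prompt("alice", now + timedelta(minutes=minute))
    print("possible MFA fatigue attack" if flagged else "normal activity")
```

In practice the same sliding-window idea would sit inside an identity provider’s event pipeline, feeding an alert or an automatic lockout rather than a print statement.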

Looking ahead to 2024, Seqrite, the enterprise arm of global cybersecurity solutions provider Quick Heal, anticipated emerging challenges that demand vigilance and strategic preparedness.

“With the rise of AI-powered threats like BlackMamba and the prevalence of Living off the Land attacks, Chief Information Security Officers (CISOs) must adopt advanced evasion techniques and heightened defences to combat evolving threats effectively,” the experts said.

According to the report, the upcoming 2024 elections are poised to attract phishing attacks exploiting political interests, while supply chain vulnerabilities underscore the need for collaborative cybersecurity efforts between the public and private sectors.

Moreover, the report emphasised the importance of implementing resilient strategies to mitigate ransomware threats through practices such as regular data backups, network segmentation, and prompt isolation of affected systems.
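The report stops at naming these practices; as a minimal sketch of just one of them, verified backups, the following Python script copies a directory tree to a timestamped destination and checks every file’s hash after the copy. The paths are hypothetical, and a real deployment would write to offline or immutable storage so that ransomware cannot encrypt the backups as well.

```python
import hashlib
import shutil
import time
from pathlib import Path

SOURCE = Path("/srv/data")          # hypothetical data to protect
BACKUP_ROOT = Path("/mnt/backups")  # hypothetical backup mount (must exist)

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_with_verification() -> Path:
    """Copy the source tree to a timestamped folder, then verify every file."""
    dest = BACKUP_ROOT / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, dest)
    for src_file in SOURCE.rglob("*"):
        if src_file.is_file():
            copied = dest / src_file.relative_to(SOURCE)
            if sha256_of(src_file) != sha256_of(copied):
                raise RuntimeError(f"verification failed for {copied}")
    return dest

if __name__ == "__main__":
    print(f"verified backup written to {backup_with_verification()}")
```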

“CISOs are encouraged to maintain vigilance regarding evolving cyber regulations and compliance standards, aligning security policies accordingly to ensure continual compliance and resilience,” the experts stated.

Further, the report highlighted the significance of embracing emerging technologies like AI, quantum computing, and IoT (Internet of Things), while remaining cognizant of the associated cybersecurity risks.

It also underscored the importance of fostering collaborative relationships among CISOs and security professionals to collectively enhance organisations’ cybersecurity posture and response capabilities.




Sexually explicit deepfakes: women are more likely to be exploited



Abuse of artificial intelligence tools is producing sexually explicit material depicting real people, according to a survey carried out for a cybersecurity firm. The survey collected data from over 2,000 Britons and found that half were worried about becoming a victim of deepfake pornography, while nearly one in ten (9 per cent) reported being a victim, knowing a victim, or both.

The anti-virus and internet security company ESET points to the rising problem of deepfake pornography, recently highlighted when explicit deepfakes of the US singer Taylor Swift were viewed millions of times. The firm describes a new form of image-based sexual abuse in the UK, citing UK Council for Internet Safety figures that at least 60pc of all revenge pornography victims are women. Under the recently passed Online Safety Act, creating or inciting the creation of deepfake pornography became a criminal offence. However, the survey suggests this has done little to alleviate fears around the technology: most women (61pc) reported concern about being a victim of it, compared with less than half (45pc) of men.

Nearly two in five (39pc) of those surveyed by Censuswide believe that deepfake pornography makes sending intimate content a significant risk, yet about a third (34pc) of adults have still sent such material. Of those who have, the research suggests a majority (58pc) regret sharing it, answering either ‘Yes, I would never send an intimate photo or video again’ or ‘Yes, but I would send an intimate photo or video again’.

The percentage of people sending intimate images or videos drops to 12pc among under-18s, perhaps because a majority (57pc) of the teenagers surveyed are concerned about becoming a victim of deepfake pornography.

Despite soaring interest in deepfakes, people are still taking risks, the firm suggests: just under a third (31pc) admitted to sharing intimate images with their faces visible. The research also found that the average age at which someone receives their first sexual image is 14.

Jake Moore, Global Cybersecurity Advisor at ESET, said: “These figures are deeply worrying as they show that people’s online habits haven’t adjusted to deepfakes yet. Digital images…


How deepfakes ‘hack the humans’ (and corporate networks)




Once crude and expensive, deepfakes are now a rapidly rising cybersecurity threat.

A UK-based firm lost $243,000 to a deepfake that replicated a CEO’s voice so accurately that the person on the other end authorized a fraudulent wire transfer. A similar “deep voice” attack that precisely mimicked a company director’s distinct accent cost another company $35 million.

Maybe even more frightening, the CCO of crypto company Binance reported that a “sophisticated hacking team” used video from his past TV appearances to create a believable AI hologram that tricked people into joining meetings. “Other than the 15 pounds that I gained during COVID being noticeably absent, this deepfake was refined enough to fool several highly intelligent crypto community members,” he wrote.

Cheaper, sneakier and more dangerous

Don’t be fooled into taking deepfakes lightly. Accenture’s Cyber Threat Intelligence (ACTI) team notes that while recent deepfakes can be laughably crude, the trend in the technology is toward more sophistication with less cost.


In fact, the ACTI team believes that high-quality deepfakes seeking to mimic specific individuals in organizations are already more common than reported. In one recent example, deepfake technology from a legitimate company was used to create fraudulent news anchors that spread Chinese disinformation, demonstrating that malicious use has arrived and is already affecting real organizations.

A natural evolution

The ACTI team believes that deepfake attacks are the logical continuation of social engineering. In fact, they should be considered together, of a piece, because the primary malicious potential of…


The rise of deepfakes in job interviews: Why we should be concerned


By Susan Armstrong

If you’re fearful of a future where a potential employer can’t tell the difference between a real applicant and a computer-generated forgery, aka a deepfake, and offers the job to the fake instead of you, you have reason to be a little alarmed.

The FBI’s Internet Crime Complaint Center (IC3) released a Public Service Announcement (PSA) warning employers and job seekers about the rising risk of deepfakes during the recruitment process.

Sure, watching startlingly accurate deepfake videos of actors like Tom Cruise can be fun, albeit a little unnerving at times. They’re so popular there’s now a TikTok account dedicated entirely to them.

There’s also the brilliantly executed Spider-Man: No Way Home trailer that replaces Tom Holland’s face with that of the original Spider-Man, Tobey Maguire.

And Korean television channel MBN showed how easily deepfakes could become part of everyday mainstream media by presenting viewers with a deepfake of its own news anchor, Kim Joo-Ha.

But the phenomenon is growing rapidly online and has the potential to become very harmful.

Earlier this year, Meta said it removed a deepfake video that claimed to show Ukrainian President Volodymyr Zelenskyy ordering Ukrainian forces to lay down their arms amid Russia’s invasion.

Deepfakes are applying for home office positions

Just as concerning is the harm that individuals could face from being targeted by deepfakes.

“The use of the technology to harass or harm private individuals who do not command public attention and cannot command resources necessary to refute falsehoods should be concerning,” the US Department of Homeland Security warned in a report about deepfake technology.

Now that cybercriminals are infiltrating organisations with deepfakes, the threat is becoming far more damaging.

According to the FBI, they’re applying for work-from-home positions that include “information technology and computer programming, database, and software related job functions. Notably, some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information”.

How can companies prevent it?

When you consider that more than 34 per cent of businesses around the globe are affected…
