Tag Archive for: criminal

TheMoon Botnet Resurfaces, Exploiting EoL Devices to Power Criminal Proxy


Mar 29, 2024 | Newsroom | Network Security / IoT Security

TheMoon Botnet

A botnet previously considered to be rendered inert has been observed enslaving end-of-life (EoL) small office/home office (SOHO) routers and IoT devices to fuel a criminal proxy service called Faceless.

“TheMoon, which emerged in 2014, has been operating quietly while growing to over 40,000 bots from 88 countries in January and February of 2024,” the Black Lotus Labs team at Lumen Technologies said.

Faceless, detailed by security journalist Brian Krebs in April 2023, is a malicious residential proxy service that offers its anonymity services to other threat actors for a negligible fee of less than a dollar per day.


In doing so, it allows the customers to route their malicious traffic through tens of thousands of compromised systems advertised on the service, effectively concealing their true origins.

The Faceless-backed infrastructure has been assessed to be used by operators of malware such as SolarMarker and IcedID to connect to their command-and-control (C2) servers to obfuscate their IP addresses.

That being said, a majority of the bots are used for password spraying and/or data exfiltration, primarily targeting the financial sector, with more than 80% of the infected hosts located in the U.S.

Lumen said it first observed the malicious activity in late 2023, with the goal of breaching EoL SOHO routers and IoT devices, deploying an updated version of TheMoon, and ultimately enrolling the botnet into Faceless.


The attacks entail dropping a loader that’s responsible for fetching an ELF executable from a C2 server. This includes a worm module that spreads itself to other vulnerable servers and another file called “.sox” that’s used to proxy traffic from the bot to the internet on behalf of a user.

In addition, the malware configures iptables rules to drop incoming TCP traffic on ports 8080 and 80 and allow traffic from three different IP ranges. It also attempts to contact an NTP server from a list of legitimate NTP servers, in a likely effort to determine whether the infected device has internet connectivity and is not being run in a sandbox.
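The NTP-based connectivity check described above can be sketched in Python, purely for illustration. The server list, timeout, and function names here are assumptions for the sketch, not TheMoon's actual code; the idea is simply that any well-formed NTP reply implies live internet access rather than a sandboxed environment:

```python
import socket

# Hypothetical list of legitimate public NTP servers (assumption for this sketch)
NTP_SERVERS = ["pool.ntp.org", "time.google.com"]
NTP_PORT = 123


def build_ntp_request() -> bytes:
    # Minimal 48-byte NTPv3 client packet: LI=0, VN=3, Mode=3 -> first byte 0x1B,
    # remaining fields zeroed (per the NTP packet header layout in RFC 5905).
    return b"\x1b" + 47 * b"\x00"


def has_internet(timeout: float = 2.0) -> bool:
    """Mirror the bot's likely check: any NTP reply means the device is online."""
    for host in NTP_SERVERS:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)
                s.sendto(build_ntp_request(), (host, NTP_PORT))
                data, _ = s.recvfrom(48)
                if len(data) >= 48:  # a full NTP header came back
                    return True
        except OSError:
            continue  # try the next server on timeout or network error
    return False
```

Cycling through several legitimate servers makes the traffic blend in with normal device behavior, which is presumably why the malware authors chose NTP over a custom beacon.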


The targeting of EoL appliances to fabricate the botnet is no…

Source…

Hackers Exploit Interest in Criminal Version of ChatGPT to Scam Other Crooks


A malicious version of ChatGPT designed to assist cybercriminals has ended up scamming crooks interested in buying access to the service.

In July, we wrote about WormGPT, a chatbot built from open-source code that promised to help hackers churn out phishing messages and malware in return for a monthly fee. The news set off concerns that generative AI could lower the bar for computer hacking, thus fueling cybercrime.

But in a bit of irony, it looks like the WormGPT brand has become more of a threat to hackers than to the public. Antivirus provider Kaspersky noticed several websites that claim to offer access to WormGPT but seem designed to scam would-be customers into giving up their funds without ever receiving access to the chatbot.

The sites, which can be found on the open internet and through a Google search, have been dressed up with official-looking information about WormGPT. However, Kaspersky suspects the pages are really just phishing pages, designed to trick users into submitting their credit card information or forking over their cryptocurrency to access the malicious chatbot.

The websites are also likely fake because the creator of WormGPT apparently abandoned the project last month after his identity was exposed. According to security journalist Brian Krebs, WormGPT’s creator is a 23-year-old Portuguese programmer named Rafael Morais, who has since backtracked on marketing his chatbot for malicious purposes. 

Following the report, the user account promoting WormGPT announced in a hacking forum that their team was bailing on the project. “With great sadness, I come to inform everyone about the end of the WormGPT project. From the beginning, we never thought we would gain this level of visibility, and our intention was never to create something of this magnitude,” the account wrote.

Weeks before the shutdown, the official WormGPT account on Telegram also warned about scammers impersonating the chatbot’s brand. “We don’t have any website and either any other groups in any platform,” the post said. “The rest are resellers or scammers!”

“Can’t believe how people still getting scammed in 2023,” the same account later added. 

But even though WormGPT…

Source…

Cyber security researchers become target of criminal hackers



Robert M Lee, the chief executive of cyber security company Dragos, received an ominous message earlier this year. An organised criminal hacking group had broken into Dragos’s employee network, telling Lee they would release the company’s proprietary data unless a ransom were paid.

He refused to negotiate, so the hackers raised the stakes. They found his son's passport details, school, and telephone number online. Lee said the message was clear: pay up, or your family is in danger.

“When you start talking about the life and safety of your kid, things take a different spin,” said Lee, a veteran of the US military and the National Security Agency.

A number of western cyber security professionals told the Financial Times that online threats had increasingly turned real in recent times. Called in by companies to thwart hacking groups, computer engineers are themselves becoming targets.

The criminal group that threatened Lee, which he declined to name, was known to resort to “swatting” — a practice in which someone maliciously calls the local authorities pretending to be the victim of an armed attack, prompting a police SWAT team to be sent to the target's home.

“Basically, they’re trying to get someone killed,” said Lee, who was told by local police that their best option in that situation was to lie down on the floor.

The threats are broad and often inventive. One Ukrainian hacker mailed a gram of heroin to the home of Brian Krebs, a journalist turned cyber security analyst. They followed up by having a florist deliver a giant bouquet in the shape of a cross to Krebs’s home.

Some hacking victims have been told to send money to the bank accounts of cyber security professionals in an effort to frame them. A North Korean hacking group pretended to be security researchers on LinkedIn, with prospective contacts then sent malware hidden in an encryption key.

“We’re an organisation that calls out threat actors all the time, and so we have to think about our own security from a company perspective, from an individual perspective, from a physical…

Source…

Another Problem With Generative AI: Criminal Hacking


There have been reasons to be wary of using generative AI, such as ChatGPT or the offerings from Google or Microsoft, in commercial real estate. Not that it's automatically beyond the pale for reasonable and prudent professionals in the industry, but there can be sneaky challenges.

For example, it can be dangerous in creating CRE legal documents, or it can stumble into the so-called hallucination problem, as the Associated Press reported, in which the software at times makes things up because it doesn't think; it just looks for connections between words without any concept of what they mean together. As Emily Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory, told AP, the problem might not be fixable. “It's inherent in the mismatch between the technology and the proposed use cases,” she said.

Now there's another area of concern: cybersecurity. People have found ways to break into almost any type of software that connects to or draws on the internet, and AI chatbots are no exception. Recently, at the annual DefCon hacking conference (a gathering long associated with “black hat” hackers — slang for those working outside the law), a lot of attention was focused on AI and security issues, as Fortune reported.

Findings won't be made public until next February, but 2,200 competitors were all trying to find problems in the eight chatbots with the largest market share.

“It's tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side,” the story quoted cybersecurity expert Gary McGraw, a co-founder of the Berryville Institute of Machine Learning, as saying.

But the overall answer was that this temptation is misplaced. Other experts said the current state of AI security is like computer security in the 1990s: young, undeveloped, and prone to easy exploits.

“Tom Bonner of the AI security firm HiddenLayer, a speaker at this year’s DefCon, tricked a Google system into labeling a piece of malware harmless merely by inserting a line that said, ‘this is safe to use,’” the…

Source…