Tag Archive for: Ethics

Ransomware Negotiation and Ethics: Navigating the Moral Dilemma


Ransomware attacks have evolved in recent years from simple data breaches into sophisticated operations targeting organizations, and the criminals behind them have gone from a minor speck on the digital security radar to perpetrators of a widespread and highly advanced form of cybercrime. Nowadays, businesses of all sizes and industries find themselves trapped in a game of digital chess. Their opponents use nefarious tactics to compromise essential and sensitive data, holding it hostage for exorbitant ransoms; ransomware attacks increased 105% in 2021.

The difficult choice of whether to engage with hackers holding critical information hostage has repercussions beyond the digital sphere, challenging the ethical foundations of businesses and institutions. A thorough analysis of the ethics behind choosing to negotiate or not is necessary as businesses struggle with the conflicting demands of protecting their operations and honoring their ethical obligations.

The Case for Negotiation

As organizations confront the imminent threat of data loss, operational disruption, and potential harm to stakeholders posed by ransomware, a compelling argument emerges in favor of engaging in negotiations. It is therefore worth examining the most effective techniques for mitigating the effects of ransomware attacks. Although it may appear counterintuitive to some, negotiation can be a useful strategy for safeguarding the interests of victims and the larger digital ecosystem.

    • Data Protection and Business Continuity: Because a business’s capacity to operate is significantly compromised when it is the target of ransomware, negotiation may restore access to crucial data and systems, allowing operations to resume quickly. Negotiation offers victims the opportunity to recover encrypted data while reducing the impact on day-to-day operations; this can be particularly crucial for medical institutions, emergency services, and other essential services that directly affect public safety and well-being.

Source…

Weaponizing Intelligence: How AI is Revolutionizing Warfare, Ethics, and Global Defense


“Is artificial intelligence the future of global warfare?” If you find that question compelling, consider this startling fact: The U.S. Army, by leveraging AI in its logistics services, has saved approximately $100 million from analyzing a mere 10% of its shipping orders. In an era defined by rapid technological advances, the marriage of artificial intelligence (AI) with military applications is shaping a new frontier. From AI-equipped anti-submarine warfare ships to predictive maintenance algorithms for aircraft, the confluence of AI and defense technologies is not only creating unprecedented capabilities but also opening a Pandora’s box of complex ethical and strategic questions.

As countries around the globe accelerate their investment in the militarization of AI, we find ourselves at a watershed moment that could redefine the very paradigms of global security, warfare ethics, and strategic operations. This article aims to dissect this intricate and evolving landscape, offering a thorough analysis of how AI’s ever-deepening integration with military applications is transforming the contours of future conflict and defense—across land, cyberspace, and even the far reaches of outer space.

AI on Land, Sea, and Air – A Force Multiplier

The evolution of AI in military applications is reshaping the traditional paradigms of land, sea, and air warfare. In the maritime realm, take DARPA’s Sea Hunter as an illustrative example—an unmanned anti-submarine warfare vessel that can autonomously patrol open waters for up to three consecutive months. This autonomous behemoth promises to revolutionize the cost metrics of naval operations, operating at a daily cost of less than $20,000 compared to $700,000 for a conventional manned destroyer. On land, the U.S. Army’s Advanced Targeting and Lethality Automated System (ATLAS) represents another significant leap. By incorporating AI into an automated ground vehicle, the military aims to accelerate target acquisition, reduce engagement time, and significantly lower the logistical and human costs associated with ground operations. The ATLAS program follows earlier attempts like the remotely controlled Military Utility Tactical Truck,…

Source…

[ Day 38 – Synopsis ] 75 Days Mains Revision Plan 2022 – Internal Security & Ethics


 

NOTE: Please remember that the following ‘answers’ are NOT ‘model answers’. Nor are they a synopsis, if we go by the definition of the term. What we are providing is content that both meets the demand of the question and, at the same time, gives you extra points in the form of background information.


Internal Security


 

Q1. Examine the role of social media in fueling hate crimes in India. How does it affect the internal security of our country? 10M

Introduction

The recent beheading of a Hindu tailor, Kanhaiya Lal, in Rajasthan’s Udaipur has raised the issue of hate crime in India once again. Between September 2015 and December 2019, most hate crimes reported in India targeted Dalits, followed by Muslims. A total of 902 alleged hate crimes were reported, on grounds varying from caste and religion to honor killing and ‘love jihad’.

Body

Social media plays a significant role in shaping democracy and the active participation of people in decision making, but in recent times it has become a tool to disseminate hate speech and fake news, fueling a rise in hate crimes.

Role of social media in fueling hate crimes in India

  • Promoting rumours – Fake information or news spread through social media has real-life implications. For instance, online rumours about anti-national elements and cow slaughter, circulated through Facebook and WhatsApp, have led to lynchings in rural areas.
  • Difficulty tracing the originator – Social media has become a popular tool to disseminate hate messages, especially those based on religion. The way a message spreads (reaching large audiences in a short period) makes it difficult to identify the source and hold the user responsible.
  • No cross-verification mechanism – Platforms like WhatsApp and Twitter act as channels for fake news and hate speech, yet they provide no cross-verification mechanism. People, especially in rural areas, accept such content without verifying the facts and then indulge in violence.
  • Polarizing political ideology – A study shows that more than 60% of the information spread by political leaders through YouTube is false or unsupported by evidence. Social media also acts as a channel for communication.
  • For example,…

Source…

AI And Human Ethics: On A Collision Course



AI systems can give rise to issues related to discrimination, personal freedoms, and accountability

Illustration: Chaitanya Surpur

As the use of artificial intelligence (AI) becomes increasingly popular among private companies and government agencies, there are growing concerns over a plethora of ethical issues arising from its use. These concerns range from various kinds of bias, including those based on race and gender, to transparency, privacy, and personal freedoms. There are further concerns related to the gathering, storage, security, usage, and governance of data—data being the founding block on which an AI system is built.

To better understand the root of these issues, we must, therefore, look at these founding blocks of AI. Let’s look at mechanisms to predict weather, to see how data helps. If, today, there are accurate systems to predict a cyclone that is forming over an ocean, it is because these systems have been fed data about various weather parameters gathered over many decades. The volume and range of this data enables the system to make increasingly precise predictions about the speed at which the cyclone is moving, when and where it is expected to make landfall, the velocity of wind and the volume of rain. If there were inadequate or poor-quality data to begin with, the prediction mechanism could not have been built.

Similarly, in AI systems, algorithms—the set of steps that a computer follows to solve a problem—are fed data, in order to solve problems. The solution that the algorithm will come up with depends solely on the data it has received; it does not, cannot, consider possibilities outside of that fixed dataset. So if an algorithm receives data only about white, middle-aged men who may or may not develop diabetes, it does not even know of the existence of non-white, young women who might also develop diabetes. Now imagine if such an AI system was developed in the US or in China, and was deployed in India to predict the rate of diabetes in any city or state.
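The point above can be made concrete with a toy sketch (entirely illustrative, not from the article): a naive "risk predictor" learns a decision threshold only from the records it is shown, so a patient whose condition presents differently from the training population is silently misclassified.

```python
# Toy illustration of dataset bias: a model's predictions are bounded
# by its training data. All names and numbers here are hypothetical.

def train_mean_threshold(records):
    """Learn a single decision rule: the mean glucose level observed
    among training records labelled diabetic."""
    diabetic = [r["glucose"] for r in records if r["diabetic"]]
    return sum(diabetic) / len(diabetic)

def predict(threshold, record):
    """Flag a record as high-risk if glucose meets the learned threshold."""
    return record["glucose"] >= threshold

# Training data drawn from one narrow population only.
train = [
    {"glucose": 150, "diabetic": True},
    {"glucose": 160, "diabetic": True},
    {"glucose": 100, "diabetic": False},
]

threshold = train_mean_threshold(train)  # 155.0

# A patient from an unrepresented group, whose diabetes presents at
# lower glucose levels, is missed entirely: a false negative.
patient = {"glucose": 140, "diabetic": True}
print(predict(threshold, patient))  # False
```

The model is not "wrong" about its training set; it simply cannot represent cases outside it, which is exactly the failure mode described when a system built on US or Chinese data is deployed in India.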

“We get training datasets, and we try to learn from them and try to make inferences from it about the future,” says Carsten Maple, professor…

Source…