How might artificial intelligence affect the risk of nuclear war?

Can you believe it: an ordinary video chat cost a Fuzhou citizen 4.3 million yuan. This astonishing recent case of telecom fraud carried out with AI technology has sparked heated discussion and concern among netizens across the country. 

Abroad, problems with AI fraud keep emerging. Someone used AI to fabricate news that former US President Trump had been "arrested", and someone used AI to imitate the voice of the President of the European Commission in a scam call to the German Chancellor. 




These cases show that the abuse and misuse of AI technology have become a global problem, and there are growing concerns that AI systems such as ChatGPT may pose a serious threat to society. Against this backdrop, 350 AI authorities, including Sam Altman, the "father of ChatGPT", signed a joint open letter warning that AI may pose an extinction risk to humanity. 

Previously, Geoffrey Hinton, the "father of deep learning", resigned from Google so that he could speak freely about the risks of AI. Indeed, the development and application of AI technology bring not only convenience and benefits but also great challenges and crises.

How to identify and deal with the potential risks of AI has become an urgent problem, which we elaborate on below.


What are the risks of AI?



Ethical and Moral Hazard

Imagine: if AI technology one day develops beyond human intelligence, how will it treat us? Will it respect our values and interests? Will it protect our agency and freedom? Will the science-fiction war between humans and machines become reality? 

This is a very important issue that we need to seriously consider because, in the development and application of AI technology, ethical and moral risks have quietly emerged. 

On the one hand, AI systems may become a mirror or amplifier of human bias, leading to unfair decisions or behavior; on the other hand, AI systems may treat different individuals or groups unequally. For example, some recruitment systems have been found to disadvantage certain genders or races, harming their employment prospects. 




Economic Risk

The economic risks of AI technology are also close to our daily lives. Fraudsters use technologies such as "face-swapping" and "voice cloning" to imitate the voice and appearance of a victim's friends or relatives so convincingly that the victim cannot tell real from fake. This is a typical way of using AI technology to defraud. 

This type of scam has reportedly achieved extremely high success rates, has already caused multiple major property losses, and puts people at serious financial risk.


At present, AI fraud includes the following common methods: 

  1. Voice synthesis: fraudsters collect samples of a person's voice (for example, from recordings of harassing phone calls) and use them to synthesize speech, so that they can deceive the other party with a faked voice.
  2. Victim screening plus AI face-swapping: fraudsters first analyze the information people publish online, use AI to screen for target groups suited to the planned scam, and then use AI face-swapping in video calls to win the victim's trust. Customized fraud scripts can be produced in a short time, enabling precisely targeted fraud.
  3. Forwarding WeChat voice messages: after stealing a WeChat account, fraudsters "borrow money" from its contacts. To gain the other party's trust, they extract stored voice files and forward them before carrying out the fraud.



Technical Risk

As a technology that is changing the way people live and work, the wide application of AI has brought great convenience, but its development has also brought a series of technical hazards and risks. 

The first is data privacy and security. The data that AI systems process usually includes users' personal information; if that data is mishandled or leaked, it causes huge losses and safety hazards for users. 
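To make the privacy concern concrete, here is a minimal sketch (with hypothetical field names) of one common mitigation: pseudonymizing direct identifiers and dropping unneeded fields before records enter a training or analytics pipeline, so that a leaked dataset does not expose raw personal information. A salted hash is not full anonymization, but it illustrates the idea.

```python
import hashlib

# Salt kept outside the dataset; in practice it should be stored and
# rotated separately from the data it protects.
SALT = b"rotate-me-per-deployment"

def pseudonymize(record):
    """Return a copy of the record safe(r) to feed into model training."""
    cleaned = dict(record)
    # Replace the direct identifier with a salted one-way hash.
    cleaned["user_id"] = hashlib.sha256(
        SALT + record["user_id"].encode()
    ).hexdigest()[:16]
    # Drop fields the model does not need at all.
    for field in ("phone", "id_number"):
        cleaned.pop(field, None)
    return cleaned

raw = {"user_id": "alice", "phone": "138-0000", "id_number": "350-0000", "age": 34}
print(pseudonymize(raw))
```

The non-identifying fields (here, `age`) survive untouched, so the data remains usable for analysis while the direct identifiers are removed or masked.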

Second, AI systems often rely on black-box models to achieve higher predictive power and efficiency. This prevents them from providing sufficient explainability to humans: the system cannot "explain" its behavior or decision criteria, which can be misleading or distorting.
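One partial remedy for the black-box problem is to probe the model from the outside. The sketch below (pure Python, with an invented stand-in for an opaque model) uses permutation importance: shuffle one input feature at a time and measure how much the predictions change, giving a rough, model-agnostic picture of which features drive a decision.

```python
import random

# Hypothetical "black-box" model: we may query its predictions
# but, by assumption, cannot inspect its internals.
def black_box_predict(row):
    return 3.0 * row["income"] + 0.5 * row["age"] - 2.0 * row["debt"]

def permutation_importance(predict, rows, feature):
    """Mean absolute change in predictions when one feature is shuffled."""
    baseline = [predict(r) for r in rows]
    shuffled_vals = [r[feature] for r in rows]
    random.shuffle(shuffled_vals)
    perturbed = []
    for r, v in zip(rows, shuffled_vals):
        r2 = dict(r)          # copy so the original row is untouched
        r2[feature] = v       # replace just this one feature
        perturbed.append(predict(r2))
    return sum(abs(a - b) for a, b in zip(baseline, perturbed)) / len(rows)

random.seed(0)
data = [{"income": random.random(), "age": random.random(),
         "debt": random.random()} for _ in range(200)]
for feat in ("income", "age", "debt"):
    print(feat, round(permutation_importance(black_box_predict, data, feat), 3))
```

Here the probe correctly ranks `income` as more influential than `age`, matching the hidden coefficients, even though the "auditor" never sees them. Real toolkits offer more rigorous versions of the same idea.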

Beyond these problems, Tao Jianhua, a professor at Tsinghua University, has noted that security vulnerabilities in large models threaten their application ecosystem and even national security, including data-poisoning attacks, adversarial-example attacks, model-theft attacks, and the theft of national, corporate, and personal information.




Social Risk

The continuous development of AI technology has also led to more and more jobs being replaced by automation, and some people have lost job opportunities, which has widened the gap between the rich and the poor. 


In some industries, AI may replace human jobs, such as: 

  • Manufacturing
  • Commerce
  • Finance
  • Law

This may lead to:

  • Unemployment and occupational shifts
  • Major socio-economic changes
  • Rising unemployment rates that negatively affect society as a whole

Beyond that, AI may have an impact on humans' social and interpersonal skills. In the future, humans may be inclined to talk to machines rather than communicate with real people, which will bring important changes to our social connection and human identity. 

If AI technology becomes intelligent enough, it may exceed the scope of human control, with unpredictable consequences. Finally, because developing AI technology requires heavy investment and technical resources, some countries or organizations may become far more competitive than others, exacerbating social inequality.


How to deal with it?



National level

Despite the transnational nature of AI technology and the high level of concern surrounding it, there is still no unified policy approach to AI regulation or data use, even though 69 countries have passed more than 800 AI-related regulations since 2017.

So how are government agencies around the world approaching the issue? 

Government agencies around the world are working to develop policies around AI regulation and data use. For example, the United Kingdom announced that it will host the world's first artificial intelligence summit, calling for stronger AI regulation. The U.S. National Institute of Standards and Technology has also developed a methodological framework for the responsible use of AI and is seeking to play a prominent role in the government’s AI regulatory efforts. 

The European Commission has proposed a draft regulation aimed at strengthening the supervision of AI technology. It would create a list of so-called "high-risk AI application scenarios", impose restrictions on AI used in critical infrastructure, university admissions, loan applications, and similar areas, establish new standards, and apply targeted supervision to the development and use of "high-risk" applications. 

In addition, the Chinese government is also actively promoting policies on artificial intelligence supervision and data use. The Cyberspace Administration of China has issued documents such as the "Artificial Intelligence Security Evaluation Specification" to strengthen the supervision of artificial intelligence security.
In the "2023 Legislative Work Plan of the State Council" issued by the General Office of the State Council recently, the draft artificial intelligence law will also be submitted to the Standing Committee of the National People's Congress for deliberation.

It can be seen that when national government agencies deal with AI regulation or data use issues, they usually formulate some regulations or guidelines to regulate relevant behaviors. These regulations or guidelines usually involve security, privacy protection, fairness, transparency, and other aspects of artificial intelligence technology. 




Enterprise level

In order to better deal with the risks brought by AI, effort is needed not only at the national level; companies that use AI technology must also act. 
Today, AI has been built into applications inside and outside many enterprises. One important direction is its combination with big data, enabling intelligent marketing, demand analysis, better service, and improved supply chains.

However, these applications also expose many weaknesses. Even ChatGPT is limited by its lack of real-time information, and heavier reliance on AI enlarges the attack surface for hackers. The best choice is to keep systems updated with the latest algorithms, so that "technology is king" in practice.

In addition, companies often play a major role in some moral and ethical issues derived from AI technology, such as automatic screening and classification that easily lead to discrimination and prejudice, and facial recognition technology that may expose people's privacy. 
It is undeniable that profit-seeking is in an enterprise's nature, but as a member of society, an enterprise should also shoulder corresponding social responsibilities: use user data legally, protect users' right to know, establish internal supervision mechanisms, and actively accept supervision by the state and society. Draining the pond to catch the fish is obviously not a long-term strategy for market competition.




Personal level

At present, most of the news about AI fraud focuses on individual cases, so everyone should improve their prevention capabilities, including anti-fraud awareness and technical knowledge. 
We must be vigilant not only about unfamiliar websites, text messages, phone calls, and friend requests, but also about sudden online requests from acquaintances for remittances or private personal information.
When you cannot meet in person to verify the situation, refuse any request involving privacy or property. If you discover a risk, seek help from the relevant authorities as soon as possible.

(Personal experience sharing found on a social platform)

Understanding cutting-edge AI technology can also improve personal anti-fraud ability. It is hard to earn money beyond one's knowledge, and equally hard to lose money within it. Although middle-aged and elderly people are easily scammed because they are unfamiliar with the Internet, AI fraud has in fact already reached the younger generation. Fraudsters often exploit a victim's overconfidence, vanity, concern for face, or eagerness for a bargain to achieve their goals. 





Treat AI risks rationally

We must be aware that the development of artificial intelligence is a double-edged sword for human society. AI has become an irreversible social trend today, but existing and potential risks require us to prevent them in advance. 
David Collingridge, a scholar at Aston University in the United Kingdom, once pointed out that if an innovative technology is controlled too early out of fear of its consequences, it may never take off; if it is controlled too late, it may spiral out of control. AI running out of control used to exist only in science fiction, but recently claims such as "AI will destroy humanity" have spread widely, and the fraud and privacy problems AI causes have been fully exposed. 

Breaking through the Collingridge dilemma of AI requires the joint efforts of international organizations, state agencies, business groups, and individuals to advance hand in hand. 

With AI speeding along society's highway, a sudden stop would only cause incalculable losses. What we must do is move forward at a steady speed.
