Can you believe it: an ordinary video chat cost a Fuzhou citizen 4.3 million yuan! This recent case of telecom fraud carried out with AI technology has set off heated discussion and concern among netizens across the country.
AI fraud is also rampant abroad. Someone used AI to fabricate fake news that former US President Trump had been "arrested", and someone used AI to imitate the voice of the President of the European Commission in a scam call to the German Chancellor.
These incidents show that the abuse and misuse of AI technology has become a global problem, and against this backdrop there are growing concerns that the development of artificial intelligence such as ChatGPT may pose a serious threat to society. In response, 350 AI authorities, including Sam Altman, the "father of ChatGPT", signed a joint open letter warning that AI may pose an extinction risk to humanity.
Earlier, Geoffrey Hinton, the "godfather of deep learning", resigned from Google so that he could speak openly about the risks of AI. Indeed, the development and application of AI technology bring people not only convenience and benefits but also great challenges and crises.
How to identify and deal with the potential risks of AI has become an urgent problem, and we elaborate on these questions below.
What are the risks of AI?
Ethical and Moral Risks
Imagine that one day AI technology develops beyond human intelligence: how will it treat us? Will it respect our values and interests? Will it protect our agency and freedom? Will the sci-fi war between humans and machines become a reality?
This is an important question that we need to consider seriously, because ethical and moral risks have already quietly emerged in the development and application of AI technology.
On the one hand, AI systems may become a mirror or an amplifier of human bias, leading to unfair decisions or behavior; on the other hand, AI systems may treat different individuals or groups unequally. For example, some recruitment systems may discriminate against certain genders or races, harming their employment prospects.
Economic Risk
The economic risks posed by AI technology also touch our daily lives. Fraudsters use "face-swapping" and voice-cloning technologies to imitate the voice and appearance of a victim's friends or relatives so convincingly that the victim cannot tell real from fake. This is a typical way of using AI technology to commit fraud.
Such scams, some with reported success rates approaching 100%, have already caused multiple major property losses and put people at serious financial risk.
At present, AI fraud includes the following common methods:
- Voice synthesis: Fraudsters obtain samples of a person's voice, for example from recordings of harassing phone calls, and then synthesize that voice so they can deceive the other party with fake audio.
- Victim screening and face swapping: Fraudsters first analyze information that people publish openly on the Internet, then use AI to screen for target groups that fit the intended scam, and finally use AI face swapping in video calls to win the victim's trust. Customized fraud scripts can be produced in a short time, enabling precisely targeted fraud.
- Forwarding WeChat voice messages: After stealing a WeChat account, fraudsters "borrow money" from the owner's friends. To gain the other party's trust, they extract previously sent voice files and forward them before carrying out the fraud.
Technical Risk
As a technology that is changing how people live and work, AI's wide application has brought great convenience, but its development has also introduced a series of technical hazards and risks.
The first is data privacy and security. The data that AI systems process usually includes users' personal information; if that data is mishandled or leaked, it can cause huge losses and safety hazards for users.
Second, AI systems often rely on black-box models to achieve higher predictive power and efficiency. This prevents them from offering sufficient explainability: the system cannot "explain" its behavior or decision criteria, which can produce misleading or distorted results.
Beyond these technical problems, Tao Jianhua, a professor at Tsinghua University, has noted that security vulnerabilities in large models threaten their application ecosystem and even national security, citing data poisoning attacks, adversarial example attacks, model theft attacks, and the theft of national, corporate, and personal information.
Social Risk
The continuous development of AI technology also means that more and more jobs are being replaced by automation; some people lose job opportunities, and the gap between rich and poor widens.
In some industries, AI may replace human jobs, such as:
- Manufacturing
- Business
- Finance
- Law
This may lead to:
- Unemployment
- Occupational shifts
- Major changes in the socio-economic structure
Further causing:
- Increased unemployment
- Negative effects on society as a whole
Beyond that, AI may affect humans' social and interpersonal skills. In the future, people may prefer talking to machines over communicating with real people, which could profoundly change our social connections and sense of human identity.
If AI technology becomes intelligent enough, it may move beyond the scope of human control, with unpredictable consequences. Finally, because developing AI technology requires heavy investment and technical resources, some countries or organizations may become far more competitive than others, exacerbating social inequality.
How to deal with it?
National level
AI technology is transnational in nature and has attracted a high level of concern. In terms of regulation, 69 countries have passed more than 800 AI-related regulations since 2017.
Enterprise level
Personal level
Treat AI risks rationally
Overcoming AI's Collingridge dilemma requires international organizations, state agencies, businesses, and individuals to work together and advance hand in hand.
With AI speeding down society's highway, a sudden stop would only cause incalculable losses. What we need to do is keep moving forward at a steady speed.