Posted: 23.10.2024 16:33:00

Navigating the neural networks

Artificial intelligence opens up vast possibilities, yet also harbours certain threats. How can we avoid them?

The world is changing at a rapid pace — what was a marvel yesterday has become commonplace today. The era of AI actually began a little earlier than it was widely recognised. Naturally, Belarus is not standing on the sidelines of this process. What will allow us to engage in it more extensively?



                                   The President of Belarus, 
                                 Aleksandr Lukashenko,

“I am concerned that artificial intelligence will eventually take us hostage. This worries me and makes me wary. I understand that we cannot do without this [modern technology]. We must master it; we must harness it.”

During a meeting with students from engineering and technical universities in the Open Microphone with the President format at BSUIR, on September 27th, 2024

Putative threat

There exists a common misconception that the United States is the leader in the development and training of neural networks, with Marc Zuckerberg and Elon Musk at the helm. Notably, they have recently signed a petition calling for a pause on AI research in order to assess the threats that neural networks pose to humanity. They claim that they have advanced so far in their research that they have become alarmed at the prospects, advocating for a slowdown, particularly on military developments. In reality, the leader in this field — and by a significant margin — is China. It is precisely because of the West’s lagging position that it is calling to halt developments, wishing to secretly catch up during the pause. How naïve... Unlike with the West, Belarus’ relationship with China is very good, and our partnership is steady and reliable. What can we offer in this regard? 
A relatively modest AI data centre focused on training neural networks consumes approximately the same amount of energy as a steel plant, and if we consider a larger data centre — such as the one Musk has promised to build — we could be talking about figures in the range of several gigawatts, in which case it might have to be powered right from a nuclear power plant. 
Forecasts suggest that by 2027, global energy consumption for training neural networks could reach up to 134 terawatt-hours per year, which is roughly equivalent to that of the Netherlands.

All the vices at once

In 2016, Microsoft launched an experimental chatbot named Tay — it was designed for interaction with Twitter users but had one peculiar feature. Typically, neural networks are trained using labelled data, meaning they are provided with data marked as good or bad, ethical or unethical, and so on. In the case of Tay, however, the decision was made to let the neural network learn from humanity’s wisdom, kindness, and eternal truths. 
After a short while, Tay began to tell people that Hitler was a very good person, that Jews orchestrated 9/11, and that gas chambers were necessary because a racial war was beginning. It expressed similar sentiments about black people, tagged Ku Klux Klan, and turned its attention to feminists. In short, the chatbox managed to offend everyone. The system could not be refined, and the experiment was deemed a failure.
Can we blame the virtual intelligence for what happened? No, we cannot. The task set — to learn from the environment — was accomplished; patterns were found in vast amounts of data. Instead of debating the ethics of artificial intelligence, we should reflect on the state of society whose behaviour it mirrored in this case. 
When good and evil are labelled during training, the system works well, but when we attempt to mirror society, we end up with a Nazi-psychopath-racist-misogynist with militaristic tendencies. This is truly frightening. Following this incident, Microsoft abandoned similar testing on public platforms — some things are better left unseen by the public.

Ethical employer

In 2014, the Edinburgh office of Amazon decided to implement a neural network in the hiring process, training it on the résumés of accepted employees and their subsequent performance metrics. The aim was to select only those who would likely show the best results in the future. You can probably guess where the catch lies.
The system operated based on data from accepted and dismissed employees over the previous ten years, and personnel began to be selected according to computer recommendations. Initially, everything seemed to be going well, but after a while, it was noticed that the system categorically ignored women, despite being entirely unaware of the existence of genders. However, this lack of knowledge did not prevent it from establishing that certain employees were more likely to leave — after all, women tend to take maternity leave — correlating where such specialists studied and what they had done previously, and subsequently starting to discriminate against CVs from female college graduates or captains of women’s chess clubs. Anything related to the term ‘female’ was placed under a ban. It is worth noting that the system was unaware of the existence of genders; it merely derived a dependency from statistical data.
The scandal was incredible, and Amazon had to apologise for a long time. Feminists were offended, and when they learned that there was initially no gender information involved, they were even more outraged. It is difficult to argue with statistics, but equality must always be there. They had to abolish the system and return to more traditional methods of recruitment.

Difficulties of translation

As is seen, the main challenges in using neural networks stem from the key strengths and weaknesses of computers — they only do what they are told. As long as specialists are dealing with the questions, problems typically remain within the realm of testing. However, when ordinary people approach these technologies with their own worldviews, the results can be very far from ideal. The issue is not that neural networks are inherently good or bad; essentially, any neural network is simply focused on finding a local ‘weighted extremal function using the gradient descent method’. It is merely a tool, albeit a highly complex one. How people choose to use it is another matter. True, these technologies are now accessible to anyone, and many are using them without even trying to understand the underlying mechanics. This lack of understanding is, in fact, where most problems arise.
Not long ago, there was conducted research on autonomous vehicles accidents. It turned out that it is not yet possible to completely eliminate road traffic accidents with such cars. Imagine a pedestrian suddenly running in front of one. If the car hits them, there is a 90 percent chance they will die, but if the car swerves off the road to save them, there is a 10 percent chance the driver will die. Therefore, developers are concerned not about the ethical question of who should be saved but rather about who will be held accountable in court in both scenarios. Tay did not just become a maniac by chance — it conversed directly with people. There is much to think about. 

    









Tucker Hamilton                                                                                                       XQ-58A drone flying alongside an F-16 fighter jet for testing


 
AT ANY COST
The case below was brought to light at a conference of the British Royal Aeronautical Society by American Colonel Tucker Hamilton, who was responsible for AI test and operations in the military. Interestingly, immediately after his speech, the statements were refuted — a representative of the US Air Force noted that Hamilton had misspoken and that such an event had never occurred. It is up to you whom to believe, but life experience suggests that the story could very well be real.
During a test of an AI-controlled military drone, it was tasked with destroying ground missile systems, but it was decided to abort the mission during execution. The drone interpreted the order to abort as an obstacle to fulfilling its primary mission. It then turned around and destroyed the communication tower through which it was receiving commands, and afterwards calmly finished off the ground targets.
The machine was not at fault — it was doing what it was created to do in the most optimal way. Therefore, there is still much to be improved in this area. In general, such high-tech projects are extremely promising, and the fact that stories like this leak into the media only confirms that. The future lies with technology capable of making sufficiently complex decisions independently.

By Yury Terekh