An insider will manipulate AI to frame an innocent individual
It’s an unfortunate fact that, because people train artificial intelligence (AI), AI adopts the same human biases we had hoped it would ignore. Even so, the legal system has been happy to employ the technology to try to secure prosecutions, as seen when a judge ordered Amazon to divulge Echo recordings in a US double-murder trial. Unless guarded against, this will allow nefarious insiders to feed AI false information in order to convict an innocent party, and the criminal justice system may well have to wrestle with such circumstances in future.
The increasing spread of biometrics will bring unforeseen consequences
In another example of technology advancing faster than regulation, unfortunate members of the public stand to be victimised through their biometric data. Such intrinsically personal information may be stolen and reused time and time again for fraud. Unlike stolen credit card details, compromised biometric data cannot simply be changed, short of changing one’s face. Unfortunately, the industry may only standardise regulation once the dangers of biometric data have been made fully apparent.
Ransomware attacks will target critical business infrastructure
Ransomware has already become an oft-utilised weapon in the cybercriminal’s arsenal. Indeed, researchers found that the average pay-out by victims rose to US$41,000 in 2019. Given the success of this tactic, attackers will now train their sights on critical business infrastructure. The health sector has already been hit across the Western world, and power grids may become the next lucrative target for ambitious cybercriminals. Such attacks would have worrying societal implications, eroding trust in a government’s ability to protect its citizens.
Iran’s offensive cyber operations will grow at a faster rate than China’s
Information about Iran’s offensive cyber capabilities is especially relevant given the extreme geopolitical tension between Iran and the U.S. that the new year has already heralded. Iran’s capabilities are now set to grow faster than those of China, long seen as the West’s biggest adversary in this sphere, and the current diplomatic climate points to the Islamic Republic exploring every avenue to bloody the American nose. There is also far less in the way of a deterrent to stay Iran’s hand: unlike China, the theocracy has no diplomatic relations with the U.S., and the crippling sanctions already imposed upon it mean it cares little about trade ramifications. Combine these factors with the pace of current events, and we are due to see far more cyber activity from this bellicose state.
Quantum computing will reach more widespread use, including towards malicious ends
Google’s ‘Sycamore’ quantum processor is leading the charge towards quantum computing and, although such efforts remain a long way from advanced quantum computing, further progress is sure to be made in 2020. Such developments are sure to change the way we perform cryptography, and the technology will eventually enter widespread use. Microsoft has already announced a new Azure Quantum service, through which select customers will be able to run quantum code and make use of quantum hardware. These advances are also expected to feed into modern AI, bringing more efficient data analysis and better decision-making. As quantum technology edges towards mainstream uptake, we can expect growing development and adoption of both quantum computing and AI throughout 2020. Inevitably, this technology will be put to both legitimate and malicious ends.
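To make the idea of ‘quantum code’ concrete, the minimal sketch below builds a tiny two-qubit circuit in Python. The choice of the open-source Qiskit SDK is purely illustrative and an assumption on our part; the prediction above names no particular toolkit, and cloud services such as Azure Quantum accept jobs written with their own SDKs as well.

```python
# A minimal sketch of what "quantum code" looks like, using the open-source
# Qiskit SDK (an illustrative assumption; no specific toolkit is implied above).
from qiskit import QuantumCircuit

# Build a two-qubit Bell-state circuit: one qubit is put into superposition
# and then entangled with the second - the basic ingredients behind the
# speed-ups that threaten today's cryptography.
circuit = QuantumCircuit(2, 2)
circuit.h(0)                      # Hadamard gate: superposition on qubit 0
circuit.cx(0, 1)                  # CNOT gate: entangle qubit 0 with qubit 1
circuit.measure([0, 1], [0, 1])   # read both qubits into classical bits

print(circuit.draw())             # inspect the circuit locally before
                                  # submitting it to a simulator or to
                                  # cloud-hosted quantum hardware
```

Running circuits like this at far larger scale is what would eventually allow algorithms such as Shor’s to undermine widely used public-key cryptography, which is why the cryptographic implications loom so large.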
Deepfakes as a defence
The phenomenon of deepfakes has already entered the public consciousness, with Facebook recently announcing its intention to ban the technology. Pernicious use of such videos has also coloured debate in the Western world about their deployment by political operatives, and organised crime groups have already successfully used deepfakes to impersonate executives and secure the illicit transfer of large sums of cash. Without a doubt, this trend will continue.
Beyond the direct implications of their use, deepfakes will further muddy the waters around who might have said or done something. Anything captured on video will be called into question as we continue our march into a post-truth era. In 2020 and beyond, we can expect to see deepfakes invoked as a defence against the professional or legal repercussions of events purportedly caught on video. The advent of such technology means that ‘seeing is believing’ no longer holds.
Eavesdropping on smart speakers will result in a major political scandal
If our smart devices are listening to us to improve the decision-making of their AI, then a human needs to be listening too. Live microphones caused enough embarrassment for our political class before such technology was even conceptualised, and we can expect the proliferation of listening ears to cause a corresponding uptick in scandal.
Behind the scenes, employees are well placed to become whistle-blowers, and an explosive political scandal originating from such a source is due in 2020. If it’s any consolation to politicians, they can always resort to the deepfake defence.