
Posts Tagged: AI

NIST attempts evaluation of user trust in AI

By Stuart O'Brien

How do humans decide whether or not to trust a machine’s recommendations? This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems.

The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems.

According to NIST’s Brian Stanton, the issue is whether human trust in AI systems is measurable — and if so, how to measure it accurately and appropriately. 

“Many factors get incorporated into our decisions about trust. It’s how the user thinks and feels about the system and perceives the risks involved in using it.”

Stanton, a psychologist, co-authored the publication with NIST computer scientist Ted Jensen. They largely base the document on past research into trust, beginning with the integral role of trust in human history and how it has shaped our cognitive processes. They gradually turn to the unique trust challenges associated with AI, which is rapidly taking on tasks that go beyond human capacity. 

“AI systems can be trained to ‘discover’ patterns in large amounts of data that are difficult for the human brain to comprehend. A system might continuously monitor a very large number of video feeds and, for example, spot a child falling into a harbor in one of them,” Stanton said. “No longer are we asking automation to do our work. We are asking it to do work that humans can’t do alone.”

The NIST publication proposes a list of nine factors that contribute to a person’s potential trust in an AI system. These factors are distinct from the technical requirements of trustworthy AI that NIST is establishing in collaboration with the broader community of AI developers and practitioners. The paper shows how a person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision.

One factor, for example, is accuracy. A music selection algorithm may not need to be especially accurate, particularly if a listener is sometimes curious to step outside their usual tastes, and skipping to the next song is easy in any case. Trusting an AI that was only 90% accurate in making a cancer diagnosis, a far riskier task, would be another matter entirely.
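As a toy illustration of that risk weighting, the same system ratings can clear a casual user’s bar for music recommendation while falling well short of the bar for a diagnosis. The factor names, weights and thresholds below are hypothetical assumptions for the sketch; they are not taken from NISTIR 8332.

```python
# Toy sketch: the same AI system ratings, weighed differently per task.
# Factor names, weights and thresholds are illustrative, not NIST's model.

def trust_score(ratings: dict, weights: dict) -> float:
    """Weighted average of per-factor ratings, each in [0, 1]."""
    total = sum(weights.values())
    return sum(ratings[f] * weights[f] for f in weights) / total

ratings = {"accuracy": 0.90, "explainability": 0.60, "reliability": 0.80}

tasks = {
    # task: (factor weights, minimum score the user requires before relying on it)
    "music recommendation": ({"accuracy": 1, "explainability": 1, "reliability": 1}, 0.60),
    "cancer diagnosis":     ({"accuracy": 5, "explainability": 3, "reliability": 3}, 0.95),
}

for task, (weights, threshold) in tasks.items():
    score = trust_score(ratings, weights)
    verdict = "trusted" if score >= threshold else "not trusted"
    print(f"{task}: score={score:.2f}, needs >= {threshold:.2f} -> {verdict}")
```

Run as written, the 90%-accurate system scores about 0.77 on both tasks but only clears the low-stakes threshold, mirroring the music-versus-diagnosis point above.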

Stanton stressed that the ideas in the publication are based on background research, and that they would benefit from public scrutiny.

“We are proposing a model for AI user trust,” he said. “It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”

Commenters may provide feedback on the draft document by downloading the comment response form and emailing it to aiusertrustcomments@nist.gov. For more information, please visit NIST’s page on AI User Trust.

Motorola splashes $445m on AI video analysis specialist

By Stuart O'Brien

VaaS International Holdings (VaaS) has been acquired by Motorola Solutions for $445 million.

The deal, a combination of cash and equity, will see VaaS, a ‘video analysis as a service’ provider, give Motorola a global image analytics and data resource for vehicle location.

VaaS’s image capture and analysis platform, which includes fixed and mobile license plate reader cameras driven by machine learning and artificial intelligence, provides vehicle location data to public safety and commercial customers.

Its subsidiaries include Vigilant Solutions for law enforcement users and Digital Recognition Network (DRN) for commercial customers. The company’s 2019 revenues are expected to be approximately $100 million.

Greg Brown, chairman and CEO, Motorola Solutions, said: “Automated license plate recognition is an increasingly powerful tool for law enforcement.

“VaaS will expand our command centre software portfolio with the largest shareable database of vehicle location information that can help shorten response times and improve the speed and accuracy of investigations.”

VaaS’s platform also enables controllable, audited data-sharing across multiple law enforcement agencies. Vehicle location information can help accelerate time to resolution and improve outcomes for public safety agencies, particularly when combined with police records. For example, law enforcement has used VaaS’ solutions to quickly apprehend dangerous suspects and find missing persons.
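A minimal sketch of that idea, under loud assumptions: every name and structure below is hypothetical, not VaaS’s or Motorola’s actual API. Each cross-agency query is checked against a sharing policy and written to an audit log, which is what makes the sharing both controllable and auditable.

```python
# Hypothetical sketch of controllable, audited cross-agency data sharing.
import datetime

SIGHTINGS = [  # (plate, timestamp, location, owning_agency)
    ("ABC123", "2019-01-04T09:12:00", "Main St & 5th Ave", "Agency-A"),
]

SHARING_POLICY = {  # owning agency -> agencies allowed to query its data
    "Agency-A": {"Agency-A", "Agency-B"},
}

AUDIT_LOG = []

def query_plate(plate: str, requesting_agency: str) -> list:
    """Return only the sightings the requester may see; log every access."""
    results = [
        s for s in SIGHTINGS
        if s[0] == plate and requesting_agency in SHARING_POLICY[s[3]]
    ]
    AUDIT_LOG.append({
        "who": requesting_agency,
        "plate": plate,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "hits": len(results),
    })
    return results

print(query_plate("ABC123", "Agency-B"))  # allowed by Agency-A's policy
print(query_plate("ABC123", "Agency-C"))  # filtered out, but still audited
print(AUDIT_LOG)
```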

“We are very excited to be joining Motorola Solutions,” said Shawn Smith, co-founder of VaaS and president of Vigilant Solutions.

“This acquisition enables us to continue to serve our existing customers and expand our footprint globally, while at the same time supporting a company with a commitment to innovation and growth, guided by a common purpose that aligns with our mission and culture: ‘To help people be their best in the moments that matter.’ It doesn’t get any better than that.”

Human decision making still the most trusted method in cybersecurity

By Stuart O'Brien

A report aggregating insights from more than 400 interviews with leading cybersecurity researchers and security experts on Artificial Intelligence (AI), Machine Learning (ML) and non-malware attacks has found that 87 per cent of those polled still don’t trust AI or ML to replace human decision making in security.

Commissioned by endpoint security specialists Carbon Black, the report also revealed the following trends:

  • 93 per cent of cybersecurity researchers said non-malware attacks pose more of a business risk than commodity malware attacks.
  • 64 per cent of cybersecurity researchers said they’ve seen an increase in non-malware attacks since the beginning of 2016. These non-malware attacks increasingly leverage native system tools, such as WMI and PowerShell, to conduct nefarious actions, researchers reported (see the detection sketch after this list).
  • AI is considered by most cybersecurity researchers to be in its nascent stages and not yet able to replace human decision making in cybersecurity. 87 per cent of researchers said it will be more than three years before they trust AI to lead cybersecurity decisions.
  • 74 per cent of researchers said AI-driven cybersecurity solutions are still flawed.
  • 70 per cent of cybersecurity researchers said ML-driven security solutions can be bypassed by attackers; 30 per cent said attackers could “easily” bypass ML-driven security.
  • Cybersecurity talent, resourcing and trust in executives continue to be top challenges plaguing many businesses.
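The WMI/PowerShell finding points at a detection pattern rather than a file signature: because non-malware attacks use legitimate tools, defenders typically look at who launched what. The sketch below flags native admin tools spawned by document or mail applications; the event format and rules are illustrative assumptions, not Carbon Black’s detection logic.

```python
# Hedged sketch: flag "non-malware" activity by parent/child process pairs,
# e.g. an Office document spawning PowerShell. Rules here are illustrative.

SUSPICIOUS_CHILDREN = {"powershell.exe", "wmic.exe", "mshta.exe"}
RISKY_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def flag_events(process_events):
    """Yield events where a document/mail app spawns a native admin tool."""
    for event in process_events:
        parent = event["parent"].lower()
        child = event["child"].lower()
        if parent in RISKY_PARENTS and child in SUSPICIOUS_CHILDREN:
            yield event

events = [
    {"parent": "explorer.exe", "child": "powershell.exe", "cmdline": "Get-Date"},
    {"parent": "WINWORD.EXE", "child": "powershell.exe",
     "cmdline": "-enc JABzAD0A..."},  # encoded command, a classic red flag
]

for hit in flag_events(events):
    print("ALERT:", hit["parent"], "->", hit["child"], hit["cmdline"])
```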

“Based on how cybersecurity researchers perceive current AI-driven security solutions, cybersecurity is still very much a ‘human vs. human’ battle, even with the increased levels of automation seen on both the offensive and defensive sides of the battlefield,” said Carbon Black Co-founder and Chief Technology Officer, Michael Viscuso. “And, the fault with machine learning exists in how much emphasis organisations may be placing on it and how they are using it. Static, analysis-based approaches relying exclusively on files have historically been popular, but they have not proven sufficient for reliably detecting new attacks. Rather, the most resilient ML approaches involve dynamic analysis – evaluating programmes based on the actions they take.”
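To make the static-versus-dynamic contrast concrete, here is a minimal sketch of scoring a programme by the actions it takes at runtime rather than by its file contents. The behaviours, weights and blocking threshold are illustrative assumptions, not Carbon Black’s model.

```python
# Illustrative sketch of dynamic (behaviour-based) analysis: score observed
# actions, not file bytes. Behaviours and weights are assumptions.

BEHAVIOUR_WEIGHTS = {
    "writes_to_startup_folder": 0.4,
    "disables_security_tooling": 0.5,
    "spawns_shell_from_office_app": 0.4,
    "reads_browser_credentials": 0.3,
    "opens_network_listener": 0.2,
}

def behaviour_score(observed) -> float:
    """Sum weights for observed behaviours, capped at 1.0."""
    return min(1.0, sum(BEHAVIOUR_WEIGHTS.get(b, 0.0) for b in observed))

trace = ["spawns_shell_from_office_app", "disables_security_tooling"]
score = behaviour_score(trace)
print(f"behaviour score: {score:.2f}", "-> block" if score >= 0.7 else "-> allow")
```

A file-hash or signature check would pass a never-before-seen attack; a trace like the one above is flagged regardless of what binary (if any) produced it, which is the resilience Viscuso describes.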

In addition to key statistics from the research, the report also includes a timeline of notable non-malware attacks, recommendations for incorporating AI and ML into cybersecurity programs and an ‘In Their Own Words’ section, which includes direct quotes from cybersecurity researchers and unique perspectives on the evolution of non-malware attacks.

“Non-malware attacks will become so widespread and target even the smallest business that users will become familiar with them,” said one cybersecurity researcher. “Most users seem to be familiar with the idea that their computer or network may have accidentally become infected with a virus, but rarely consider a person who is actually attacking them in a more proactive and targeted manner.”

www.carbonblack.com