
Posts Tagged: Artificial Intelligence

NIST attempts evaluation of user trust in AI

By Stuart O'Brien

How do humans decide whether or not to trust a machine’s recommendations? This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems.

The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems.

According to NIST’s Brian Stanton, the issue is whether human trust in AI systems is measurable — and if so, how to measure it accurately and appropriately. 

“Many factors get incorporated into our decisions about trust. It’s how the user thinks and feels about the system and perceives the risks involved in using it.”

Stanton, a psychologist, co-authored the publication with NIST computer scientist Ted Jensen. They largely base the document on past research into trust, beginning with the integral role of trust in human history and how it has shaped our cognitive processes. They gradually turn to the unique trust challenges associated with AI, which is rapidly taking on tasks that go beyond human capacity. 

“AI systems can be trained to ‘discover’ patterns in large amounts of data that are difficult for the human brain to comprehend. A system might continuously monitor a very large number of video feeds and, for example, spot a child falling into a harbor in one of them,” Stanton said. “No longer are we asking automation to do our work. We are asking it to do work that humans can’t do alone.”
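At its core, the monitoring pattern Stanton describes is a learned detector run continuously over many streams at once. A minimal Python sketch of that loop, in which `score_event` stands in for a hypothetical trained model (nothing here is NIST's implementation):

```python
from typing import Callable, Iterable, Iterator, Tuple

Frame = bytes  # stand-in for a decoded video frame

def monitor_feeds(
    feeds: dict[str, Iterable[Frame]],
    score_event: Callable[[Frame], float],
    threshold: float = 0.9,
) -> Iterator[Tuple[str, float]]:
    """Scan every frame of every feed; yield an alert whenever the
    detector's confidence crosses the threshold."""
    for feed_id, frames in feeds.items():
        for frame in frames:
            score = score_event(frame)  # e.g. P("person falling into water")
            if score >= threshold:
                yield feed_id, score
```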

The NIST publication proposes a list of nine factors that contribute to a person’s potential trust in an AI system. These factors are different from the technical requirements of trustworthy AI that NIST is establishing in collaboration with the broader community of AI developers and practitioners. The paper shows how a person may weigh the factors described differently depending on both the task itself and the risk involved in trusting the AI’s decision.

One factor, for example, is accuracy. A music selection algorithm may not need to be especially accurate: a listener curious to step outside their usual tastes may even welcome the occasional miss, and skipping to the next song is easy. It would be a far different matter to trust an AI that was only 90% accurate in making a cancer diagnosis, a far riskier task.
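The weighting idea can be made concrete with a small sketch: the same perceived factor scores clear the trust bar for a low-risk task but not for a high-risk one. The factor names beyond accuracy, the weights and the risk thresholds below are illustrative assumptions, not the model in NISTIR 8332:

```python
def trust_score(factors: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of perceived factor scores, each in [0, 1]."""
    total = sum(weights.values())
    return sum(factors[name] * weights[name] for name in weights) / total

def would_trust(score: float, task_risk: float) -> bool:
    # Riskier tasks demand a higher trust score before a user relies on the AI.
    return score >= task_risk

perceived = {"accuracy": 0.90, "explainability": 0.60, "safety": 0.70}
weights = {"accuracy": 1.0, "explainability": 0.5, "safety": 0.8}
score = trust_score(perceived, weights)  # ~0.77

print(would_trust(score, task_risk=0.30))  # music selection: True
print(would_trust(score, task_risk=0.95))  # cancer diagnosis: False
```

The toy model captures the article's example: 90% accuracy is ample trust for a playlist and nowhere near enough for a diagnosis.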

Stanton stressed that the ideas in the publication are based on background research, and that they would benefit from public scrutiny.

“We are proposing a model for AI user trust,” he said. “It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”

Commenters may provide feedback on the draft document by downloading the comment response form and emailing it to aiusertrustcomments@nist.gov. For more information, please visit NIST’s page on AI User Trust.

UK creates ‘AI Council’ to boost corporate artificial intelligence usage

By Stuart O'Brien

Leaders from business, academia and data privacy organisations have joined an independent expert committee created to help boost growth of AI in UK business.

The line-up includes Paul Clarke, Chief Technology Officer of online-only retailer Ocado; Dame Patricia Hodgson, board member of the Centre for Data Ethics and Innovation; and Professor Adrian Smith, Chief Executive of The Alan Turing Institute.

Other representatives are AI for Good founder Kriti Sharma, UKRI Chief Executive Mark Walport and Professor David Lane, Founding Director of the Edinburgh Centre for Robotics.

The government says AI Council members are already leading the way in the development of AI: Ocado uses it to personalise the shopping experience while predicting demand, detecting fraud and keeping consumers safe, while the Alan Turing Institute builds on the great British pioneer’s legacy by identifying and overcoming barriers to AI adoption in society, such as skills, consumer trust and the protection of sensitive data.

These experts, it says, will help put in place the right skills, ethics and data so the UK can make the most of AI technologies.

Digital Secretary Jeremy Wright said: “Britain is already a leading authority in AI. We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector and attracting the best global tech talent, but we must not be complacent.

“Through our AI Council we will continue this momentum by leveraging the knowledge of experts from a range of sectors to provide leadership on the best use and adoption of artificial intelligence across the economy.

“Under the leadership of Tabitha Goldstaub the Council will represent the UK AI Sector on the international stage and help us put in place the right skills and practices to make the most of data-driven technologies.”

Overseeing the council, and advising the Government on how to encourage businesses and organisations to boost their use of AI, is Tabitha Goldstaub, co-founder of Cognition X, an online platform that gives companies information on and access to AI experts. Goldstaub also runs CogX, one of the largest gatherings of AI experts in the world.

“I’m thrilled the AI Council membership has been announced, convening a brilliant mix of experts who have agreed to offer their time, experience and insight to support the growth and responsible adoption of AI in the UK,” said Goldstaub.

“If we are to grasp the full benefits of AI technologies it is vital all of the AI community comes together and works with the AI Council to create an open dialogue between industry, academia and the public sector, so we can see social and economic benefits for all of society.”

The intention is for the AI Council to cultivate and encourage a much wider representation of experts to focus on specific topics, which will initially include, but not be limited to, data and ethics, adoption, skills and diversity. This will allow the broader AI community to work together to drive towards solutions and engage in making the UK a leader in the AI and data revolution.

Motorola splashes $445m on AI video analysis specialist

By Stuart O'Brien

VaaS International Holdings (VaaS) has been acquired by Motorola Solutions for $445 million.

The deal, a combination of cash and equity, gives Motorola a global image analytics resource for vehicle location through VaaS, a ‘video analysis as a service’ provider.

VaaS’s image capture and analysis platform, which includes fixed and mobile license plate reader cameras driven by machine learning and artificial intelligence, provides vehicle location data to public safety and commercial customers.

Its subsidiaries include Vigilant Solutions for law enforcement users and Digital Recognition Network (DRN) for commercial customers. The company’s 2019 revenues are expected to be approximately $100 million.

Greg Brown, chairman and CEO, Motorola Solutions, said: “Automated license plate recognition is an increasingly powerful tool for law enforcement.

“VaaS will expand our command centre software portfolio with the largest shareable database of vehicle location information that can help shorten response times and improve the speed and accuracy of investigations.”

VaaS’s platform also enables controllable, audited data-sharing across multiple law enforcement agencies. Vehicle location information can help accelerate time to resolution and improve outcomes for public safety agencies, particularly when combined with police records. For example, law enforcement has used VaaS’ solutions to quickly apprehend dangerous suspects and find missing persons.
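In miniature, "controllable, audited" sharing means every cross-agency query is filtered by a sharing policy and written to an audit log. The record layout and policy shape below are assumptions for illustration, not VaaS's actual schema or API:

```python
import datetime

AUDIT_LOG: list[dict] = []

def query_plate(plate: str, requesting_agency: str,
                sightings: list[dict],
                sharing_policy: dict[str, set[str]]) -> list[dict]:
    """Return the sightings this agency may see, and log the query for audit."""
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agency": requesting_agency,
        "plate": plate,
    })
    allowed = sharing_policy.get(requesting_agency, set())
    return [s for s in sightings
            if s["plate"] == plate and s["owner_agency"] in allowed]
```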

“We are very excited to be joining Motorola Solutions,” said Shawn Smith, co-founder of VaaS and president of Vigilant Solutions.

“This acquisition enables us to continue to serve our existing customers and expand our footprint globally, while at the same time supporting a company with a commitment to innovation and growth, guided by a common purpose that aligns with our mission and culture: ‘To help people be their best in the moments that matter.’ It doesn’t get any better than that.”

Human decision making still the most trusted method in cybersecurity

By Stuart O'Brien

A report aggregating insight from more than 400 interviews with leading cybersecurity researchers and security experts on Artificial Intelligence (AI), Machine Learning (ML) and Non-Malware Attacks has found that 87 per cent of those polled still don’t trust AI or ML to replace human decision making in security.

Commissioned by endpoint security specialists Carbon Black, the report also revealed the following trends:

  • 93 per cent of cybersecurity researchers said non-malware attacks pose more of a business risk than commodity malware attacks.
  • 64 per cent of cybersecurity researchers said they’ve seen an increase in non-malware attacks since the beginning of 2016. These non-malware attacks increasingly leverage native system tools, such as WMI and PowerShell, to conduct nefarious actions, researchers reported (see the sketch after this list).
  • AI is considered by most cybersecurity researchers to be in its nascent stages and not yet able to replace human decision making in cybersecurity. 87 per cent of the researchers said it will be longer than three years before they trust AI to lead cybersecurity decisions.
  • 74 per cent of researchers said AI-driven cybersecurity solutions are still flawed.
  • 70 per cent of cybersecurity researchers said ML-driven security solutions can be bypassed by attackers. 30 per cent said attackers could “easily” bypass ML-driven security.
  • Cybersecurity talent, resourcing and trust in executives continue to be top challenges plaguing many businesses.
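Native-tool attacks trouble file-based defences because powershell.exe is a legitimate signed binary: there is no malicious file to hash, so detection has to look at how the tool is invoked. A minimal sketch, with indicator strings that are common illustrative examples rather than any vendor's rule set:

```python
SUSPICIOUS_POWERSHELL = (
    "-encodedcommand",          # obfuscated script passed inline
    "downloadstring",           # fetch-and-run code straight off the network
    "-executionpolicy bypass",  # sidestep local script restrictions
)

def flag_process(image_name: str, command_line: str) -> bool:
    """Flag a launch of a trusted system binary with suspicious arguments."""
    cmd = command_line.lower()
    if image_name.lower().endswith("powershell.exe"):
        return any(marker in cmd for marker in SUSPICIOUS_POWERSHELL)
    return False

# The binary itself is benign; the invocation pattern is what gets flagged.
print(flag_process(
    "powershell.exe",
    "powershell.exe -ExecutionPolicy Bypass -EncodedCommand JAB4AD0A..."))  # True
```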

“Based on how cybersecurity researchers perceive current AI-driven security solutions, cybersecurity is still very much a ‘human vs. human’ battle, even with the increased levels of automation seen on both the offensive and defensive sides of the battlefield,” said Carbon Black Co-founder and Chief Technology Officer, Michael Viscuso. “And, the fault with machine learning exists in how much emphasis organisations may be placing on it and how they are using it. Static, analysis-based approaches relying exclusively on files have historically been popular, but they have not proven sufficient for reliably detecting new attacks. Rather, the most resilient ML approaches involve dynamic analysis – evaluating programmes based on the actions they take.”
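The contrast Viscuso draws can be sketched in a few lines: a static check only matches hashes it has already seen, so a novel attack passes, while a dynamic check scores the actions a running program takes. Both rule sets below are illustrative assumptions, not Carbon Black's detection logic:

```python
KNOWN_BAD_HASHES = {"9f86d081884c7d65..."}  # hypothetical blocklist entry

def static_verdict(file_hash: str) -> bool:
    """File-based detection: malicious only if the hash is already known."""
    return file_hash in KNOWN_BAD_HASHES

ACTION_WEIGHTS = {
    "office_app_spawns_shell": 0.6,
    "writes_autorun_registry_key": 0.3,
    "injects_into_system_process": 0.9,
}

def dynamic_verdict(observed_actions: list[str], threshold: float = 0.8) -> bool:
    """Behaviour-based detection: score what the program does, not what it is."""
    return sum(ACTION_WEIGHTS.get(a, 0.0) for a in observed_actions) >= threshold

# A never-before-seen sample slips past the static check...
print(static_verdict("previously-unseen-hash"))  # False
# ...but its behaviour still trips the dynamic score (0.6 + 0.3 >= 0.8).
print(dynamic_verdict(["office_app_spawns_shell",
                       "writes_autorun_registry_key"]))  # True
```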

In addition to key statistics from the research, the report also includes a timeline of notable non-malware attacks, recommendations for incorporating AI and ML into cybersecurity programs and an ‘In Their Own Words’ section, which includes direct quotes from cybersecurity researchers and unique perspectives on the evolution of non-malware attacks.

“Non-malware attacks will become so widespread and target even the smallest business that users will become familiar with them,” said one cybersecurity researcher. “Most users seem to be familiar with the idea that their computer or network may have accidentally become infected with a virus, but rarely consider a person who is actually attacking them in a more proactive and targeted manner.”

www.carbonblack.com