Posts Tagged: Artificial Intelligence

How Artificial Intelligence is enhancing physical security in the UK’s commercial and public sectors

Stuart O'Brien

Security threats are growing in complexity, demanding advanced solutions. Artificial Intelligence (AI) is increasingly playing a pivotal role in reshaping the security landscape within the UK’s commercial and public sectors. By integrating AI, organisations can bolster their physical security measures to unprecedented levels. Let’s delve into the specific ways AI is being leveraged…

  1. Intelligent Video Surveillance:
    • Function: Traditional security cameras merely record, but AI-powered cameras can analyse. These systems recognise suspicious behaviour, unattended packages, or individuals in restricted zones.
    • Benefit: This proactive approach ensures real-time alerts and rapid response, drastically minimising potential security breaches (see the sketch after this list).
  2. Facial Recognition:
    • Function: AI algorithms can scan, recognise, and match faces in crowded places or against a database in real-time.
    • Benefit: Essential for public sectors like airports or railway stations, this technology helps identify wanted criminals or missing persons instantly, while commercial establishments can spot potential threats or trespassers.
  3. Number Plate Recognition:
    • Function: AI systems automatically recognise vehicle number plates, a crucial tool at parking facilities, checkpoints, or border controls.
    • Benefit: This can be used to detect stolen vehicles, track movements of suspicious cars, or automate toll collections.
  4. Predictive Analysis:
    • Function: By analysing historical data, AI predicts potential future security threats or vulnerable points within a facility.
    • Benefit: Organisations can then take proactive measures, either by strengthening security in identified areas or being on high alert during vulnerable times.
  5. Drone Patrols:
    • Function: AI-driven drones equipped with cameras can patrol large areas, offering aerial surveillance, especially in hard-to-reach zones.
    • Benefit: Essential for large commercial compounds, public parks, or border areas, these drones ensure comprehensive monitoring, often faster and more efficiently than human patrols.
  6. Voice Recognition:
    • Function: AI systems can identify and authenticate individuals based on their unique voice prints.
    • Benefit: This offers an additional layer of security, particularly in sensitive commercial sectors like banking or data centres, ensuring access only to authorised personnel.
  7. Behavioural Analytics:
    • Function: Beyond just facial recognition, AI can analyse body language, gaits, or behavioural patterns to detect suspicious activity.
    • Benefit: This nuanced level of detection, especially in crowded public areas, can help spot potential threats based on behaviour rather than identity.
  8. Integration with Digital Security:
    • Function: AI seamlessly integrates physical security with digital security protocols, offering a holistic security framework.
    • Benefit: This ensures that breaches in the digital realm, like hacking into a surveillance system, are detected and counteracted in the physical world.
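
As an illustration of how item 1 above works in practice, here is a minimal sketch, assuming OpenCV, a hypothetical recorded camera feed and a placeholder restricted-zone polygon. It flags movement inside a marked area using simple background subtraction; a production system would use trained detection models and per-camera calibration rather than this simplified approach.

```python
# Minimal sketch: flag motion inside a restricted zone using OpenCV.
# The video path, polygon coordinates and thresholds are placeholders.
import cv2
import numpy as np

RESTRICTED_ZONE = np.array([[100, 100], [400, 100], [400, 300], [100, 300]], dtype=np.int32)
MIN_AREA = 500  # ignore tiny contours (sensor noise)

def monitor(video_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        mask = backsub.apply(frame)  # foreground (moving) pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(c)
            centre = (float(x + w // 2), float(y + h // 2))
            # >= 0 means the centre lies inside or on the restricted polygon
            if cv2.pointPolygonTest(RESTRICTED_ZONE, centre, False) >= 0:
                print(f"frame {frame_idx}: movement inside restricted zone at {centre}")
    cap.release()

if __name__ == "__main__":
    monitor("restricted_zone.mp4")  # hypothetical camera recording
```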

The union of AI with physical security offers a robust, proactive, and adaptive approach to safety, tailored to the UK’s commercial and public sectors’ unique challenges. As AI technology advances further, its integration will only become more seamless, heralding a new era of security consciousness and efficacy.

Learn more about the use of AI in physical security at the Total Security Summit.

Image by Gerd Altmann from Pixabay

Study shows generative AI now an ’emerging risk’ for enterprises

Stuart O'Brien

The mass availability of generative AI, such as OpenAI’s ChatGPT and Google Bard, became a top concern for enterprise risk executives in the second quarter of 2023.

“Generative AI was the second most frequently named risk in our second quarter survey, appearing in the top 10 for the first time,” said Ran Xu, director of research in the Gartner Risk & Audit Practice. “This reflects both the rapid growth of public awareness and usage of generative AI tools, as well as the breadth of potential use cases, and therefore potential risks, that these tools engender.”

In May 2023, Gartner surveyed 249 senior enterprise risk executives to provide leaders with a benchmarked view of 20 emerging risks. The Quarterly Emerging Risk Report includes detailed information on the possible impact, time frame, level of attention, and perceived opportunities for these risks.

Third-party viability was the top fast-emerging risk that organizations were monitoring most closely in the 2Q23 survey. Financial planning uncertainty was the third-ranked risk, followed by cloud concentration risk, with China trade tensions rounding out a top five split between issues symptomatic of the current macroeconomic and geopolitical volatility and technology-related concerns.

Mass Generative AI Availability
Gartner has previously identified six risks of generative AI and four areas of AI regulation that are relevant to assurance functions. In terms of managing enterprise risk, three main aspects must be addressed, according to Gartner experts:

  • Intellectual Property
    “Information entered into a generative AI tool can become part of its training set, meaning that sensitive or confidential information could end up in outputs for other users,” said Xu. “Moreover, using outputs from these tools could well end up inadvertently infringing the intellectual property rights of others who have used it.”

    It’s important to educate corporate leadership on the necessity for caution and transparency around the use of such tools so that intellectual property risks can be properly mitigated both in terms of input and output from generative AI tools.

  • Data Privacy
    Generative AI tools may share user information with third parties, such as vendors or service providers, without prior notice. This has the potential to violate privacy law in many jurisdictions. For example, regulation has already been implemented in China and the EU, with proposed regulations emerging in the USA, Canada, India and the UK, among others.
  • Cybersecurity
    “Hackers are always testing new technologies for ways to subvert them for their own ends, and generative AI is no different,” said Xu. “We’ve seen examples of malware and ransomware code that generative AI has been tricked into producing, as well as ‘prompt injection’ attacks that can trick these tools into giving away information they should not. This is leading to the industrialization of advanced phishing attacks.”

Third-Party Viability

“Persistent inflation that is less responsive to interest rate rises and continues longer than anticipated has escalated costs and margin pressures on third parties,” said Xu. “As central banks increase interest rates to fight inflation, this also brings about a process of credit tightening that may force suppliers to suspend operations or become insolvent as borrowing costs rise.”

If economic conditions deteriorate broadly, this may cause an unexpected drop in demand that could affect vendor viability or their ability to provide goods and services in a timely manner. Gartner experts identified three potential third-party viability consequences for risk managers to monitor as the situation develops:

  • Loss of Key Inputs and Materials: If third parties increase their prices due to the wider economic situation, there is a clear risk of losing access to key inputs and materials, as suppliers will favour customers willing to pay higher prices.
  • Flawed Financial Planning Assumptions: Cost assumptions will be rendered invalid as suppliers increase prices or fail, necessitating switching costs and increased prices for obtaining goods and services.
  • Challenges Outside the Supply Chain: Partners, such as managed service providers or commercial partners, creditors, or technology vendors may cease or curtail operations.

Could AI-generated ‘synthetic data’ be about to take off in the security space?

Stuart O'Brien

Synthetic data startups are spearheading a revolution in artificial intelligence (AI) by redefining the landscape of data generation, with implications for myriad industries, including security.

That’s according to GlobalData, which says that, with substantial venture capital investments and a clear sense of direction, these startups are transforming industries, overcoming data limitations, and propelling AI innovation to new heights.

Kiran Raj, Practice Head of Disruptive Tech at GlobalData, said: “Synthetic data startups are breaking through the shackles of data quality and regulation, becoming the trusty substitutes for AI training. As the demand for reliable, cost-effective, time-efficient, and privacy-preserving data continues to accelerate, startups envision a future powered by synthetic data, ushering in a new era of machine learning progress. The continuous exploration and innovation in this space promise exciting opportunities and transformative impact on AI development in the years to come.”

Shagun Sachdeva, Project Manager of Disruptive Tech at GlobalData, added: “The bullish investment landscape, expanding use cases across industries, and the ongoing AI advancements flowing to downstream tasks signify that we are merely scratching the surface of what synthetic data can truly achieve. Ranging from financial services and healthcare to automotive and retail sectors, GlobalData expects more remarkable innovations and transformative impacts across industries in the realms of synthetic data, which bodes well for the startups working in the space.”

GlobalData’s Innovation Radar report, Startup Series – Synthetic Data – The Master Key to AI’s Future, highlights the dynamic application landscape of synthetic data by startups across sectors.

Healthcare

Synthetic data in healthcare enables privacy-preserving research, improves AI model training by augmenting real patient data, and supports simulation and training for medical professionals. It also aids in drug discovery, clinical trials, and optimizing healthcare systems for enhanced patient care. Aindo, Betterdata, and Gretel are some of the synthetic data startups addressing the needs of the healthcare sector.

Financial services

Synthetic data offers significant advantages in financial services, including fraud detection, customer analytics, regulatory compliance, portfolio management, cybersecurity, and chatbot training. By harnessing the power of synthetic data, financial institutions can enhance operational efficiency, mitigate risks, personalize services, and drive innovation in a privacy-conscious manner. Clearbox, Hazy, and Diveplane are some of the synthetic data startups that offer solutions for the financial services sector.

Automotive

Synthetic data plays a vital role in the automotive sector, particularly in autonomous vehicle development, virtual testing, driver assistance systems, design optimization, HMI development, and traffic simulation. By leveraging synthetic data, automotive companies can accelerate innovation, improve safety, optimize manufacturing processes, and enhance the overall driving experience. Rendered AI, Anyverse, and Sky Engine AI are some of the synthetic data startups catering to the needs of the automotive sector.

Retail

Synthetic data in the retail sector enables accurate demand forecasting, personalized marketing, optimized pricing, improved store layouts, fraud detection, and enhanced customer service. By leveraging synthetic data, retailers can make data-driven decisions, enhance customer experiences, and optimize operations for business growth. Betterdata, Zumo Labs, and Synthesis AI are some of the synthetic data startups that offer solutions for the retail sector.

Security

It’s not difficult to see how applications of the above in the security space could have a significant impact, not just in terms of data security, but also in planning crowd control scenarios or training.

Sachdeva concluded: “Despite the considerable attention and substantial investment in synthetic data, user skepticism, dependency on real data, and a lack of standards, trust and awareness can hinder acceptance. As we closely monitor this evolving landscape, it will be interesting to watch startups within the synthetic data space addressing these challenges and offering solutions that will mold the future trajectory of AI.”

Image by Brian Merrill from Pixabay

The transformative impact of AI on physical security

Stuart O'Brien

Artificial Intelligence (AI) is revolutionising various sectors, and one area where its impact is becoming increasingly significant is in the realm of physical security. From surveillance systems to access control, AI-powered solutions are reshaping how we safeguard our physical spaces…

Enhanced Surveillance

One of the primary applications of AI in physical security is in surveillance systems. Traditional closed-circuit television (CCTV) cameras are being upgraded with AI algorithms that enable intelligent video analysis. AI-powered surveillance systems can detect and track suspicious activities, unauthorised access, and unusual behaviour patterns. Through real-time monitoring and analysis, these systems can provide early warning alerts, minimising response times and enhancing overall security effectiveness.

Facial Recognition and Access Control

AI has also ushered in advancements in access control systems through the implementation of facial recognition technology. Facial recognition algorithms can accurately identify individuals, granting or denying access based on predetermined criteria. This technology enhances security by eliminating the vulnerabilities associated with lost or stolen access cards or passwords. It also allows for efficient management of access permissions, making it easier to track and control entry to restricted areas.
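
As a rough illustration of the matching step behind such access control, here is a minimal sketch assuming face embeddings have already been produced by a separate face-encoder model (not shown); the enrolled identities, embedding size and similarity threshold are illustrative placeholders.

```python
# Minimal sketch of embedding-based access control. Embeddings are assumed
# to come from an external face-encoder; random vectors stand in for them here.
import numpy as np

ENROLLED = {                      # hypothetical enrolled staff -> embedding
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}
MATCH_THRESHOLD = 0.92            # cosine similarity required to grant access

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_access(probe_embedding: np.ndarray) -> tuple[bool, str | None]:
    """Compare a probe embedding against every enrolled identity."""
    best_name, best_score = None, -1.0
    for name, enrolled_embedding in ENROLLED.items():
        score = cosine_similarity(probe_embedding, enrolled_embedding)
        if score > best_score:
            best_name, best_score = name, score
    granted = best_score >= MATCH_THRESHOLD
    return granted, best_name if granted else None

# Example: a probe captured at the door (random numbers for illustration only)
granted, who = check_access(np.random.rand(128))
if granted:
    print("access granted to", who)
else:
    print("access denied")
```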

Predictive Analytics

By leveraging AI and machine learning algorithms, physical security systems can now employ predictive analytics to assess potential threats and vulnerabilities. These systems analyse vast amounts of data, including historical patterns, weather conditions, and social media feeds, to identify potential risks. With predictive analytics, security personnel can proactively respond to emerging threats and allocate resources more effectively, thereby preventing security breaches.
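
The idea can be illustrated with a small, hypothetical sketch: a simple classifier trained on historical incident records scores upcoming time slots so resources can be allocated proactively. The features, data and model choice below are assumptions for illustration, not a description of any specific vendor's analytics.

```python
# Minimal sketch of predictive scoring on (synthetic) historical incident data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic history: [hour_of_day, visitor_count, prior_incidents_at_location]
X = np.array([
    [2, 5, 0], [23, 8, 1], [14, 120, 0], [1, 3, 2],
    [22, 15, 3], [13, 90, 0], [3, 4, 1], [15, 110, 0],
])
y = np.array([0, 1, 0, 1, 1, 0, 0, 0])  # 1 = an incident followed

model = LogisticRegression().fit(X, y)

# Score upcoming time slots so patrols can be allocated proactively
upcoming = np.array([[1, 6, 1], [14, 100, 0]])
for features, risk in zip(upcoming, model.predict_proba(upcoming)[:, 1]):
    print(f"slot {features.tolist()}: incident risk {risk:.2f}")
```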

Autonomous Security Robots

AI-driven autonomous security robots are another noteworthy innovation in physical security. Equipped with sensors, cameras, and AI algorithms, these robots can patrol and monitor large areas autonomously, relieving security personnel of routine tasks. They can detect intrusions, collect real-time data, and transmit information to a central control centre. These robots not only augment security capabilities but also provide a visible deterrent to potential wrongdoers.

Intelligent Incident Response

When security incidents occur, AI can play a vital role in expediting response times and minimising damage. AI algorithms can analyse data from multiple sources, such as security cameras, alarms, and sensors, to identify the nature and severity of an incident. This real-time analysis enables security personnel to make informed decisions promptly, facilitating swift responses and appropriate deployment of resources.

The integration of AI into physical security systems is revolutionising how we protect our physical spaces. From enhanced surveillance and facial recognition to predictive analytics and autonomous security robots, AI brings numerous benefits to the field of physical security. By leveraging the power of AI, organisations can bolster their security measures, mitigate risks, and respond more effectively to threats.

However, it is essential to address ethical considerations and ensure the responsible and transparent use of AI in physical security to strike the right balance between safety and privacy. As AI continues to evolve, we can expect even more innovative applications and advancements that will shape the future of physical security for the better.

Image Credit

https://pixabay.com/photos/ai-generated-science-fiction-robot-7718658/


AI set for dramatic growth as security applications proliferate

Stuart O'Brien

The global artificial intelligence (AI) market is forecast to grow at a compound annual growth rate (CAGR) of 21.4%, from $81.3 billion in 2022 to $383.3 billion in 2030, driven by applications such as facial recognition.
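
Those headline figures are internally consistent; a quick sanity check of the implied compound annual growth rate:

```python
# Verify the quoted 21.4% CAGR from $81.3bn (2022) to $383.3bn (2030).
start, end, years = 81.3, 383.3, 2030 - 2022
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")                        # ~21.4%
print(f"2030 value at 21.4% CAGR: {start * 1.214 ** years:.1f}bn")  # ~383bn
```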

GlobalData’s latest thematic report, “Artificial Intelligence,” reveals that the explosion in the volume of sensor data, coupled with the increased sophistication of advanced deep learning models, the emergence of generative AI, and the availability of chips created specifically for AI processes, will all drive growth in AI over the coming years.

Josep Bori, Research Director at GlobalData Thematic Intelligence, said: “Despite the hype, artificial general intelligence (AGI), or the ability of machines to do anything a human can and to possess consciousness, is still decades away. However, ‘good enough’ AI is already here, capable of interacting with humans, moving, and making decisions. For example, OpenAI’s GPT-3 and ChatGPT models can write original prose and chat with human fluency, DeepMind’s algorithms can beat the best human chess players, and Boston Dynamics’ Atlas robots can somersault. If this evolution continues, it could upend the labor-based capitalist economic model.”

Driven by ethical and political concerns, the use of AI for facial recognition will lead to conflicting standards and regulatory approaches. This will lead to the break-up of the global supply chain in the AI segment, as is already underway in semiconductors. Ultimately, stricter ethical regulation will break the global AI market into geopolitical silos, isolated from one another.

Bori continued: “The most advanced AI technologies, such as computer vision and generative language models, rely on powerful AI chips. As such, the ongoing US-China trade dispute, which has led to the US prohibiting exports to China of either AI chips or the tools to manufacture them, will disrupt the competitive landscape. China will lose its AI market dominance unless it can secure access to advanced chip manufacturing technology.”

The ongoing trade dispute between China and the US has negative implications for the global progress of AI technologies. However, China will play a leading role in AI due to its leadership in AI software and IoT technology and its progress in low-end chip manufacturing.

Bori concluded: “Unless China solves its access to extreme ultraviolet (EUV) lithography technology, currently indirectly prevented by the US sanctions, and can manufacture more powerful and miniaturized chips (i.e., on 5 and 3 nanometer nodes), it will struggle in AI in the data centre and related fields such as computer vision.”

Will AI make us more secure?

Guest Blog

By Monica Oravcova, COO & Co-Founder, Naoris Protocol

ChatGPT, the dialogue-based AI chatbot capable of understanding natural human language, has become another icon in the disruptor ecosystem. Gaining over 1 million registered users in just 5 days, it has become the fastest-growing tech platform ever.

ChatGPT generates impressively detailed human-like written text and thoughtful prose, following a text input prompt. In addition, ChatGPT can write and hack code, which is a potential major headache from an infosec point of view and has set the Web3 community on fire. The community is reeling from the implications and the sheer ingenuity of an AI chatbot that can analyse code and detect vulnerabilities in seconds: https://twitter.com/gf_256/status/1598104835848798208

The Naoris Protocol POV:

Following the hype around ChatGPT, the race is now on between OpenAI’s ChatGPT and Google’s LaMDA to be the market-leading NLP search tool for users and corporations moving forward. OpenAI is a newbie with $1B in funding and a $20B valuation, as opposed to Google’s towering $281B revenue. However, Google must rapidly innovate and adapt or risk being left behind, as happened with TikTok and Meta: TikTok’s short format caught the zeitgeist and it became the most downloaded app, beating Facebook in 2022. Google is taking this seriously, having announced a ‘code red’ to develop a new AI-based search engine product to counter OpenAI’s land grab. Ironically, ChatGPT uses the same conversational AI platform developed by Google’s engineers in 2017.

How this will affect cybersecurity in the future is unknown, but there are some assumptions.

In the long term, this will be a net positive for the future of cyber security if the necessary checks and balances are in place. In the short term, AI will expose vulnerabilities which will need to be addressed, as we could see a potential spike in breaches.

AI that writes and hacks code could spell trouble for enterprises, systems and networks. Current cybersecurity is already failing with exponential rises in hacks across every sector, with 2022 reportedly already 50% up on 2021.

With AI maturing, the use cases can be positive for the enterprise security and development workflow, increasing defence capabilities above current security standards. Naoris Protocol utilises Swarm AI as part of its breach detection system, which monitors all networked devices and smart contracts in real time.

  • AI can help organisations improve their cybersecurity defences by enabling them to better detect, understand and respond to potential threats. AI can also help organisations respond to and recover from cyberattacks more quickly and effectively by automating tasks such as incident response and investigation. It can free up human resources to focus on more high-level, strategic tasks.
  • By analysing large volumes of data and using advanced machine learning algorithms, AI could (in the future) identify patterns and trends that may indicate a cyberattack is imminent, allowing organisations to take preventative measures before an attack occurs, minimising the risk of data breaches and other cyber incidents.
  • The adoption of AI could help organisations stay one step ahead of potential attacks and protect their sensitive data and systems, by integrating AI into the production pipeline to create smarter and more robust code, with developers instructing AI to write, generate and audit existing code.
  • AI currently cannot replace developers, as it cannot understand all of the nuances of systems (and business logic) and how they work together. Developers will still need to read and critique the AI’s output, learning patterns and looking for weak spots. AI will positively impact the CISO and IT team’s ability to monitor in real time. Security budgets will be reduced and cybersecurity teams will shrink in number; only those who can work with and interpret AI will be in demand.

However, bad actors can expand the attack surface, working smarter and a lot quicker by instructing AI to look for exploits and vulnerabilities within existing code infrastructure. The cold hard truth could mean that thousands of platforms and smart contracts suddenly become exposed, leading to a short-term rise in cyber breaches.

  • As ChatGPT and LaMDA are reliant on large amounts of data to function effectively, if the data used to train these technologies is biased or incomplete, it could lead to inaccurate or flawed results: Microsoft’s Tay AI, for example, turned evil within hours. Naoris Protocol uses Swarm AI only to monitor the metadata of the known operational baselines of devices and systems, ensuring they have not been tampered with in any way. Therefore, the Naoris Protocol AI only detects behavioural changes to devices and networks, referencing known industry baselines (OS and firmware updates, etc.) rather than learning and forming decisions based upon diverse individual opinions (a generic sketch of this baseline-comparison approach follows this list).
  • Another issue is that AI is not foolproof and can still be vulnerable to cyberattacks or other forms of manipulation. This means that organisations need to have robust security measures in place to protect these technologies and ensure their integrity.
  • It is also important to consider the potential ethical implications of using ChatGPT and LaMDA for cybersecurity. For example, there may be concerns about privacy and the use of personal data to train these technologies, or about the potential for them to be used for malicious purposes. However, Naoris Protocol only monitors metadata and behavioural changes in devices and smart contracts, and not any kind of personally identifiable information (PII).
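
As referenced in the list above, here is a generic sketch of the baseline-comparison idea: compare the metadata a device reports against a recorded known-good baseline and flag any drift. It illustrates the general approach only, not Naoris Protocol's actual implementation; the device names and metadata fields are hypothetical.

```python
# Generic sketch of baseline-drift monitoring: flag any deviation between a
# device's reported metadata and its recorded known-good baseline.
BASELINES = {
    "camera-lobby-01": {"os_version": "4.2.1", "firmware": "1.9.3", "open_ports": [443]},
    "door-controller-03": {"os_version": "2.0.7", "firmware": "3.1.0", "open_ports": [443, 8443]},
}

def check_device(device_id: str, reported: dict) -> list[str]:
    """Return a list of human-readable deviations from the recorded baseline."""
    baseline = BASELINES.get(device_id)
    if baseline is None:
        return [f"{device_id}: unknown device, no baseline on record"]
    deviations = []
    for field, expected in baseline.items():
        observed = reported.get(field)
        if observed != expected:
            deviations.append(f"{device_id}: {field} changed from {expected!r} to {observed!r}")
    return deviations

# Example: a device reporting an unexpected open port triggers an alert
report = {"os_version": "4.2.1", "firmware": "1.9.3", "open_ports": [443, 23]}
for alert in check_device("camera-lobby-01", report):
    print("ALERT:", alert)
```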

Conclusion

AI will require enterprises to up their game. They will have to implement and use AI services within their security QA workflow processes prior to launching any new code or programmes. AI is not a human being: it will miss basic preconceptions, knowledge and subtleties that only humans see. It is a tool that will help catch vulnerabilities coded in error by humans, and it will seriously improve the quality of code across all web2 and web3 organisations. The current breach detection time, as measured in IBM’s 2020 data security report, is up to 280 days on average. Using AI systems such as Naoris Protocol’s cybersecurity solution as part of an enterprise defence-in-depth posture, breach detection times can be reduced to less than 1 second, which changes the game.

It is worth noting that AI is a relatively new technology that is still being developed and refined, so we can never trust its output 100%. Human developers will always be needed to ensure that code is robust and meets an organisation’s business requirements. However, AI is being used by both good and bad actors in an effort to gain the edge. With regulation working several years behind technology, organisations need to embed a cyber-secure mentality across their workforces in order to combat the increasing number of evolving hacks. The genie is now out of the bottle, and if one side isn’t using the latest technology, they’re going to be in a losing position. So if there’s an offensive AI out there, enterprises will need the best AI tool to defend themselves with. It’s an arms race as to who has the best tool.

Could AI-generated inventions soon be patented in the UK, and how will this impact businesses?

Guest Blog

With the rapid advancements in artificial intelligence technology, how are AI-generated inventions recognised when it comes to patents? Innovation funding and Patent Box experts ABGI UK look into where inventions created by AI systems currently stand in regards to intellectual property, and how potential changes will affect UK businesses…

As artificial intelligence becomes increasingly advanced, how is AI-generated innovation considered when it comes to intellectual property?

The issue is more pertinent than ever following the case earlier this year of Thaler v Comptroller General of Patents, Trade Marks and Designs. After Dr Stephen Thaler submitted two patents naming his AI machine “DABUS” as the inventor, the UK Intellectual Property Office withdrew the patents, citing that the machine did not meet the necessary criteria for an inventor. When the case was taken to the UK Court of Appeal, the Court backed the IPO’s decision.

So why does the issue of AI in relation to patents continue to be a point of contention?

IPO Consultation is Launched: Can Artificial Intelligence Hold Ownership of New Patents?

In the conclusion of the case, the Court acknowledged that the law on inventorship continues to change and that it remains open to further development. The Intellectual Property Office (IPO) subsequently launched a consultation into the issue on the 29th of October, stating that “Artificial intelligence (AI) is playing an increasing role in both technical innovation and artistic creativity. Patents and copyright must provide the right incentives to AI development and innovation, while continuing to promote human creativity and innovation.” In other words, the government recognises that patent limitations on AI-generated inventions could hinder UK businesses and individuals, and is reviewing its treatment of AI in copyright and patents legislation to seek a balanced solution.

How Long Until the UK Names an AI System as Inventor on a Patent?

Concurrently with the investigation into patent protection for AI-devised inventions, the National AI Strategy was published this September, making clear the government’s ambition to become a global leader in artificial intelligence.

With the government keen to push AI and machine learning across UK industry sectors, the legal framework surrounding intellectual property rights such as patents could need to be adjusted to suit the changing scenario and reflect that the concept of “creations of the mind” may no longer apply exclusively to the inventions of humans.

Countries such as South Africa have recently granted successful patents to artificial intelligence systems; the recent IPO consultation confirms that the UK is determined not to be left behind in the technological race, and therefore changes to the UK Patents Act 1977 may occur sooner rather than later.

How Might This Change Impact Innovative UK Businesses?

One of the main ways in which a change in UK regulation regarding AI-held patents would positively impact UK businesses would be in regards to Patent Box eligibility.

The Patent Box regime was introduced in reaction to the relatively low number of patent applications submitted in the UK annually compared to many other countries, providing an incentive for UK companies to formalise the IP generated from UK-based R&D and commercialise their IP, repatriating the economic benefits back into the UK.

Aiming to increase the level of patenting of UK-developed IP and ensure that new and existing patents are developed in, manufactured in and sold from the UK, the UK’s Patent Box regime is among the most favourable in the world. Profits earned from patents and intellectual property rights under the Patent Box regime benefit from a reduced tax rate of just 10%; with the imminent increase in the standard corporation tax rate in the UK from 19% to 25% in 2023, the tax advantage of Patent Box becomes even more significant.
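
Using the rates quoted above, a quick worked example shows the scale of the saving on a hypothetical £1m of qualifying IP profit:

```python
# Worked example of the Patent Box saving on £1m of qualifying IP profit
# (an illustrative figure), using the 25% and 10% rates quoted above.
qualifying_profit = 1_000_000          # hypothetical profit from patented IP
standard_rate, patent_box_rate = 0.25, 0.10
saving = qualifying_profit * (standard_rate - patent_box_rate)
print(f"tax at 25%: £{qualifying_profit * standard_rate:,.0f}")   # £250,000
print(f"tax at 10%: £{qualifying_profit * patent_box_rate:,.0f}") # £100,000
print(f"annual saving: £{saving:,.0f}")                           # £150,000
```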

If the change in legislation regarding AI-generated patents comes into effect, IP-protected AI innovation will also be eligible for Patent Box, creating the potential for huge savings on profits generated from AI-generated inventions.

UK companies should ensure all their intellectual property is structured to take advantage of Patent Box with immediate effect, including investigating AI creations for a potential shift in patent legislation – but what does this involve?

Get Ready For Change and Plan Ahead

  • Make sure you conduct IP reviews at regular intervals. For each element of IP considered for protection, establish a cost/benefit comparison to decide whether or not it’s worth protecting.
  • Review your R&D plans to establish whether any of your products, services or processes could be patented to receive the benefit of the Patent Box regime both now and in the case of a future reform.
  • Educate everyone involved in R&D about the importance of IP protection and the risks related to data leakage.
  • Keep a laboratory notebook recording R&D progress to prove precedence in the case of competing patent cases.
  • Look into Patent Box eligibility even if your patent is pending. Companies with pending patent applications can also qualify retrospectively for the 10% rate once the patent is granted, but the company has to elect into the scheme for the accounting period in which profits are generated. If companies submitting patent applications to the UK IPO inform the IPO that they intend to use the Patent Box scheme to improve the business benefits of commercialisation, some patent attorneys believe the IPO will give the application preferential treatment and speed up the patent grant process. So do not dismiss the idea of Patent Box eligibility just because your patent is not yet approved, as it could mean missing out on enormous tax reductions.

How Can UK Businesses Elect into the Patent Box Scheme?

The idea behind Patent Box is simple enough, but electing into the scheme can be complex.

Getting advice from a specialist can ensure you’re making the most of the scheme, helping to provide clarity on areas such as the impact of your existing R&D on the calculation of relevant IP income, how to best manage tax benefits when combining R&D tax relief and Patent Box schemes, or on legislative changes and their impact on current or future patent box claims. Receiving guidance here will help identify key areas where new patent applications are needed and provide a clear patent strategy for the business moving forward in regards to areas of potential change such as patents from AI-generated innovation.

From AI to ESG: Key security-technology trends of 2023

Guest Blog

Johan Paulsson, Chief Technology Officer, Axis Communications, explores the six key technology trends that are set to impact the security sector in the coming year…

Technology is pervasive in every aspect of our personal and work lives. Every new technological development and every upgrade brings new benefits, makes the tools we rely on more effective, and creates stronger, more efficient services. But as technology’s integration into society deepens, awareness of its implications is becoming more heightened.

Ours is an industry making use of increasingly intelligent systems with technology inherently involved in collecting sensitive data. It is also an industry that is as impacted by geopolitical issues affecting international trade as any other sector. Security innovations will absolutely create a smarter, safer world, but in 2023 we will need to evolve to keep pace with these trends – all while moving fast to exploit new technological opportunities.

A move towards actionable insights

“From analytics to action” will become a mantra for 2023. AI and machine learning may have aided the development of advanced analytics in recent years, but the focus moving forward will be on exploiting the actionable insights they deliver.

The huge increase in data being generated by surveillance cameras and sensors is a key driver for this transition. It is impossible for human operators to interpret the nuances of large data sets and act quickly enough, but analytics and AI functionality can now recommend, prompt, and even start to automatically take real-time actions which support safety, security, and operational efficiency in every key vertical.

Analytics can support new methods of post-incident forensic analysis using, for example, assisted search to automatically find desired video among massive silos of camera data. New techniques will also be used to predict outcomes, using sensors to propose preventative maintenance actions to minimise potential industrial outages before failure occurs.

The rise of case-defined hybrid architectures

Advanced analytics can run directly within surveillance cameras on the edge of the network. After-the-fact analysis, though, is a job for on-site servers or the cloud. Building the ultimate data analysis solution demands a hybrid computing architecture – and one which meets a customer’s requirements precisely.

There is no perfect off-the-shelf configuration. Each business must assess its specific use case and define the hybrid solution that will meet its needs. This process is complicated by localised requirements around data privacy and retention, which can force the use of on-premises storage over the convenience of the cloud, but architecture refinements are an essential part of any 2023 technology strategy. Businesses must maintain the flexibility to create the hybrid architecture best suited to their specific needs – architecture which can change as demand and future trends dictate.
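
One way to picture such a case-defined hybrid architecture is as a simple placement policy: real-time analytics stay on the camera, while retrospective workloads go to on-premises servers or the cloud depending on data-residency constraints. The sketch below is a hypothetical illustration of that decision logic, not a reference design; the workload names and rules are assumptions.

```python
# Hypothetical placement policy for a hybrid edge / on-premises / cloud setup.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    realtime: bool                 # must react within the frame rate?
    contains_personal_data: bool   # subject to data-residency rules?

def place(workload: Workload, cloud_allowed_for_personal_data: bool) -> str:
    if workload.realtime:
        return "edge (in-camera analytics)"
    if workload.contains_personal_data and not cloud_allowed_for_personal_data:
        return "on-premises server"
    return "cloud"

workloads = [
    Workload("restricted-zone detection", realtime=True, contains_personal_data=True),
    Workload("assisted forensic search", realtime=False, contains_personal_data=True),
    Workload("aggregate footfall reporting", realtime=False, contains_personal_data=False),
]
for w in workloads:
    print(f"{w.name}: {place(w, cloud_allowed_for_personal_data=False)}")
```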

Exploiting functions beyond security

Security hardware can present an opportunity to do more. Cameras themselves are powerful sensors capturing both quality video information and, thanks to advanced analytics, metadata which makes them useful in new and novel ways.

Camera metadata can be combined with input from other sensors – monitoring temperature, noise, air and water quality, vibration, weather, and more – to create an advanced sensory network and enable data-driven decisions. While we’re beginning to see this kind of multi-sensor monitoring appearing in industrial and data centre environments, the eventual use cases are limited only by our imaginations – and platform-agnostic data streams enable bespoke applications for any use.

The emergence of cybersecurity sub-trends

In the video surveillance sector, ensuring the authenticity and privacy of every data stream as it moves from camera to cloud to server is essential to maintain trust in its value. Cybersecurity is as vital today as it has always been, but 2023 will see a more proactive approach by technology vendors in identifying vulnerabilities, with bug bounty programs becoming even more commonplace to incentivise external parties to take a white hat approach.

Customers will also increasingly expect transparency regarding the cybersecurity of security solutions, with a Software Bill of Materials (SBOM) becoming standard in assessing software security and risk management.

Sustainability always, climate change at the forefront

Aside from cybersecurity, the requirement for organisations to measure and improve their environmental, societal, and business governance practices remains essential – and all of these aspects will come under increasing scrutiny from customers of security and safety solutions.

Given the extreme conditions of the past year, expect a more acute focus specifically on addressing climate change in 2023. While organisations might make great efforts to reduce emissions from their own operations, these can be undermined if their upstream and downstream value chains are not aligned with the same targets.

Tech companies will also be expected to demonstrate more clearly the ways their products and services support the sustainability goals of their own customers, creating novel and intelligent efficiencies that also help those organisations reduce emissions.

An increased regulatory focus

The compliance goalposts move regularly, and often with great speed. Each new regulation ratified brings a different aspect of software or hardware into focus. The European Commission’s proposed AI Act[1], for example, aims to assign specific risk categories to uses of AI, and will no doubt be the subject of much debate before it becomes law.

But whether in relation to AI, demands surrounding cybersecurity, data privacy, the influence of ‘big tech’, or tech sovereignty, it’s clear that technology companies in the security sector will increasingly need to adhere to more stringent regulations.

Key targets for 2023

2023 is not a year of great upheaval; it is one of realignment. Our sector’s greatest opportunity continues to come from focusing on commercial success in tandem with our responsibility to address the critical issues facing the planet and its population. By working together towards a common goal, human inventiveness, advances in technology, and ethical business practices can be combined to make the world a better place.

Learn more about Axis’ trends for 2023.

About The Author
Johan Paulsson is an old hand in the Swedish tech scene, having been COO and head of R&D at Ericsson Mobile, and COO at Anoto. He joined Axis in 2008 and as CTO has overall responsibility for not just its current crop of products, but thinking about what the future might hold, too. Johan got his start with a master’s degree in Electrical Engineering from the University of Lund in Sweden, and loves the city so much that he never left. He is also a member of the board at poLight, a Norwegian company working to replicate the human eye lens.

[1] https://artificialintelligenceact.eu

Half of IT professionals believe AI poses ‘existential threat to humanity’

Stuart O'Brien

Artificial intelligence (AI) technology is closer to us than ever before. However, could AI pose a threat to humanity? Well, 49% of IT professionals believe it does. Despite that, many other experts see AI as a companion that helps with various tasks rather than a future enemy.

According to data presented by Atlas VPN, nearly three out of four (74%) IT professionals think AI will automate tasks and enable more time to focus on strategic initiatives. About two-thirds (67%) of IT professionals believe that AI will be a mission-critical element of their business strategy in the years to come.

In addition, three out of five (62%) experts expect to work alongside intelligent robots or machines in the next 5 years. On the other hand, some professionals think that AI can also cause harm, as 55% feel that AI will create major data privacy issues.

About half of IT experts believe that AI will put IT jobs at risk and that innovation in AI presents an existential threat to humanity.

Cybersecurity writer at Atlas VPN Vilius Kardelis said: “The AI we have today can benefit businesses by making various tasks easier. However, that does not guarantee it is always positive. AI is a tool with potentially harmful consequences if used in the wrong hands. Despite this, it appears unlikely that it will pose an existential threat to humanity in the near future.”

Of course, many businesses already utilize AI for many different tasks. In the next 2 years, 45% of IT professionals plan to use AI for data analytics. Furthermore, AI will be used to detect and deter security intrusions and fraud at 40% of surveyed specialists’ companies in the upcoming years.

One out of three (34%) IT experts plan to use AI for machine learning. Another third (31%) of professionals believe their company will use AI for transferring and cross-referencing data. In addition, 29% of experts see AI helping with web and social media analytics and natural language processing in the next 2 years.

NIST attempts evaluation of user trust in AI

Stuart O'Brien

How do humans decide whether or not to trust a machine’s recommendations? This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems.

The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems.

According to NIST’s Brian Stanton, the issue is whether human trust in AI systems is measurable — and if so, how to measure it accurately and appropriately. 

“Many factors get incorporated into our decisions about trust. It’s how the user thinks and feels about the system and perceives the risks involved in using it.”

Stanton, a psychologist, co-authored the publication with NIST computer scientist Ted Jensen. They largely base the document on past research into trust, beginning with the integral role of trust in human history and how it has shaped our cognitive processes. They gradually turn to the unique trust challenges associated with AI, which is rapidly taking on tasks that go beyond human capacity. 

“AI systems can be trained to ‘discover’ patterns in large amounts of data that are difficult for the human brain to comprehend. A system might continuously monitor a very large number of video feeds and, for example, spot a child falling into a harbor in one of them,” Stanton said. “No longer are we asking automation to do our work. We are asking it to do work that humans can’t do alone.”

The NIST publication proposes a list of nine factors that contribute to a person’s potential trust in an AI system. These factors are different than the technical requirements of trustworthy AI that NIST is establishing in collaboration with the broader community of AI developers and practitioners. The paper shows how a person may weigh the factors described differently depending on both the task itself and the risk involved in trusting the AI’s decision.

One factor, for example, is accuracy. A music selection algorithm may not need to be overly accurate, especially if a person is curious to step outside their tastes at times to experience novelty — and in any case, skipping to the next song is easy. It would be a far different matter to trust an AI that was only 90% accurate in making a cancer diagnosis, which is a far riskier task. 
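
The accuracy example can be made concrete with a small, hypothetical weighting sketch: the same factor scores produce a different overall trust judgement depending on how heavily a given task weights each factor. The factor names, weights and scores below are illustrative only and are not the nine factors proposed in NISTIR 8332.

```python
# Illustrative sketch of task-dependent weighting of trust factors.
def trust_score(factor_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of factor scores (each in 0..1)."""
    total_weight = sum(weights.values())
    return sum(factor_scores[f] * w for f, w in weights.items()) / total_weight

scores = {"accuracy": 0.90, "explainability": 0.60, "reliability": 0.80}

# Low-stakes task (music selection): all factors weighted equally
music_weights = {"accuracy": 1, "explainability": 1, "reliability": 1}
# High-stakes task (medical diagnosis): accuracy and reliability dominate
diagnosis_weights = {"accuracy": 5, "explainability": 2, "reliability": 4}

print(f"music selection trust score:  {trust_score(scores, music_weights):.2f}")
print(f"cancer diagnosis trust score: {trust_score(scores, diagnosis_weights):.2f}")
```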

Stanton stressed that the ideas in the publication are based on background research, and that they would benefit from public scrutiny.

“We are proposing a model for AI user trust,” he said. “It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”

Commenters may provide feedback on the draft document by downloading the comment response form and emailing it to aiusertrustcomments@nist.gov. For more information, please visit NIST’s page on AI User Trust.
