While generative AI continues its rapid growth, dominating headlines and generating a global wave of excitement, turbulence has struck as privacy and transparency concerns push the US and EU towards further regulation. This follows Italy temporarily banning ChatGPT, while the US government introduced a code of conduct placing responsibility on software makers rather than consumers. By comparison, the UK government unveiled its AI white paper this week, setting out an approach intended to encourage growth while protecting users. Despite this, experts claim the UK remains a step behind in its approach to AI regulation. Claire Trachet, cybersecurity expert and CFO of YesWeHack, comments on balancing investment in innovation against regulation.
The AI sector is a key economic stimulant: AI developers contributed roughly £3.7bn in value to the UK economy and attracted almost £19bn in private investment through 2022. According to a government report, over 50,000 people work at 3,170 AI-focused companies in the UK, which generated a combined £10bn in revenue last year. Despite the growing investment AI attracts, fears in the US have prompted the government to take further steps towards placing responsibility for safeguarding on software makers rather than users.
Experts have also highlighted AI’s capacity to subvert personal privacy and even imitate or steal personal identity. Sceptics and experts alike, including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk, are calling for a pause in the development of AI, with Musk stating that generative AI’s advancement to this extent would be humanity’s ‘biggest existential threat’.
While these concerns appear valid, the UK’s framework sets out principles, rather than new legislation, for existing regulators to apply. The UK government appears to be prioritising its commitment to ‘unleashing AI’s potential across the economy’, as laid out in the white paper. Despite the clear advantages of this approach, other countries have taken a different path. Policymakers across the UK, EU and US will define global AI power dynamics over the next decade, and their success will rely heavily on the trust they garner with consumers and their attractiveness to investors.
Claire Trachet, CFO of YesWeHack, comments on AI regulation versus investment and how the onus should lie with CEOs of AI companies:
“In terms of AI, there needs to be a balance in stimulating innovation and mitigating privacy concerns, which means making sure an equal amount of investment goes into both. We know that AI brings in tons of investment, so it’s important that the investment goes into safeguarding and regulations as well as a drive for innovation.
“We need to invest more in ethics; we’re now seeing experts and figures like Elon Musk and Steve Wozniak call for a six-month break because there is not enough investment in security, or in-depth discussion of how to navigate these problems. While we have some form of risk management and various reports coming out now, none of them amounts to a truly coordinated approach.
“In a time when everything is quite short and mid-term, what we need is old-school reflection led by philosophy and morals. What we have seen with President Biden’s legislation in the US, making the responsibility lie with the software makers, should, when you think about it, have been the decision from day one. Too often, the software is built, and then when something goes wrong, the responsibility does not lie with the developers but is instead shifted onto the consumers.
“Looking at AI, we do not have the time that other sectors had to establish effective regulation, because everything is so fast-paced. However, what we can do is place a clear legal responsibility on the boards and CEOs of these AI companies, so that the companies are regulating their products at all times. Looking at Italy, a full ban is not the way forward; it’s extreme. Yes, some form of regulation needs to be put in place for safeguarding, but the onus should lie with the companies, and this safeguarding needs to be tracked to ensure they don’t run into problems with the software.”
HedgeThink.com is the fund industry’s leading news, research and analysis source for individual and institutional accredited investors and professionals.