SolarWinds: IT professionals want stronger AI regulation
A new survey from SolarWinds reveals a resounding call for increased government oversight of AI, with 88% of IT professionals advocating stronger regulation.
The study, which polled nearly 700 IT experts, highlights security as the paramount concern. An overwhelming 72% of respondents emphasised the critical need for measures to secure infrastructure. Privacy follows closely behind, with 64% of IT professionals calling for more robust rules to protect sensitive information.
Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, commented: “It is understandable that IT leaders are approaching AI with caution. As technology rapidly evolves, it naturally presents challenges typical of any emerging innovation.
“Security and privacy remain at the forefront, with ongoing scrutiny by regulatory bodies. However, it is incumbent upon organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts. This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”
The survey’s findings come at a pivotal moment, coinciding with the implementation of the EU’s AI Act. In the UK, the new Labour government recently proposed its own AI legislation in the latest King’s Speech, signalling a growing recognition of the need for regulatory frameworks. In the US, the California State Assembly passed a controversial AI safety bill last month.
Beyond security and privacy, the survey reveals a broader spectrum of concerns amongst IT professionals. A majority (55%) believe government intervention is crucial to stem the tide of AI-generated misinformation. Additionally, half of the respondents support regulations aimed at ensuring transparency and ethical practices in AI development.
Challenges extend beyond AI regulation
However, the challenges facing AI adoption extend beyond regulatory concerns. The survey uncovers a troubling lack of trust in data quality—a cornerstone of successful AI implementation.
Only 38% of respondents consider themselves ‘very trusting’ of the quality of the data used to train AI systems. This scepticism is not unfounded: 40% of IT leaders who have encountered issues with AI attribute these problems to algorithmic errors stemming from insufficient or biased data.
Consequently, data quality emerges as the second most significant barrier to AI adoption (16%), trailing only security and privacy risks. This finding underscores the critical importance of robust, unbiased datasets in driving AI success.
“High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” adds Johnson. “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.”
The survey also sheds light on widespread concerns about database readiness. Fewer than half (43%) of IT professionals express confidence in their company’s ability to meet the increasing data demands of AI. This lack of preparedness is compounded by a perception that organisations are moving too slowly on AI implementation, with 46% of respondents citing ongoing data quality challenges as a contributing factor.
As AI continues to reshape the technological landscape, the findings of this SolarWinds survey serve as a clarion call for both stronger regulation and improved data practices. The message from IT professionals is clear: while AI holds immense promise, its successful integration hinges on addressing critical concerns around security, privacy, and data quality.