
Democratization of IT: when AI moves into management

AI has gone from technical curiosity to the top of the boardroom agenda in a short time. Despite headlines about a "new industrial revolution," we are not here to spread hype; quite the opposite. In a sober but far-sighted spirit, it can be stated that AI is already rewriting the rules of business logic. Companies that have traditionally been stable see their business models challenged by algorithms and self-learning systems. In short: it is no longer a question of whether AI will affect business, but how.

For today's business leaders, it is therefore a matter of understanding the technology in depth in order to guide its development, not just letting the IT department hold the wheel. Sure, AI's rapid advance may seem dizzying, but sticking your head in the sand is about as effective as ignoring the elephant in the server room. Business logic is changing at supersonic pace, and far-sighted organizations are already preparing. This review comes at the right time: to provide a credible, strategic look at where AI is taking us next, without descending into fluff, and with a twinkle in the eye.


The Elephant in the Server Room — AI's Ethical and Regulatory Challenges

When AI is put in everyone's hands, difficult questions arise: Who is responsible if something goes wrong? How do we avoid over-reliance? ("What could possibly go wrong?", as an ironic voice whispers in the back of the mind.) For the AI revolution to truly benefit businesses, trust is required, and trust is built through ethics, transparency and accountability. Without a clear framework, the democratization of IT risks becoming a pure Wild West.

Fortunately, lawmakers and organizations have begun to act. The EU's new AI Act sets the bar high globally. It divides AI systems into different levels of risk: unacceptable risk (banned entirely, e.g., social scoring systems), high risk (allowed only under strict requirements), limited risk (allowed with certain transparency requirements), and minimal risk (freely usable, e.g., AI in games or spam filters). For high-risk AI, extensive requirements are imposed on documentation, quality control and human oversight. Even general-purpose AI models such as ChatGPT are covered: among other things, providers must publish a summary of the data the model has been trained on.
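To make the tiered logic concrete, here is a minimal sketch of how an organization might triage its own AI inventory against the four risk levels. The tiers follow the AI Act, but the example use cases, the names and the default-to-high rule are hypothetical illustrations, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the EU AI Act."""
    UNACCEPTABLE = "banned entirely"
    HIGH = "allowed only under strict requirements"
    LIMITED = "allowed with transparency requirements"
    MINIMAL = "freely usable"

# Hypothetical internal inventory mapping use cases to tiers (illustrative only).
INVENTORY = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening in recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Unknown use cases default to HIGH so a human reviews them first."""
    return INVENTORY.get(use_case, RiskTier.HIGH)

for case in ["spam filter", "CV screening in recruitment", "emotion recognition at work"]:
    tier = triage(case)
    print(f"{case}: {tier.name} ({tier.value})")
```

Defaulting unclassified systems to the high-risk tier is a deliberately conservative choice: it forces the documentation and human-oversight routine before anything slips into production unreviewed.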

A concrete example of the transparency requirements is the labeling of AI-generated content. Under the AI Regulation, AI systems that create or manipulate content must clearly inform the recipient that the material is artificially produced. Deepfakes and other synthetic media must carry a watermark or machine-readable metadata to prevent misleading information. Explainability is demanded in a similar way: AI systems need to be able to account for how they arrive at their decisions, at least when used in critical decision-making processes.
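As a sketch of what such labeling could look like in practice, the snippet below attaches a machine-readable disclosure record to a piece of generated content. The helper and its field names are a hypothetical in-house convention; real deployments would follow an emerging standard such as C2PA content credentials.

```python
import hashlib
import json
from datetime import datetime, timezone

def disclosure_record(content: bytes, generator: str) -> dict:
    """Build a machine-readable label stating that the content is AI-generated.
    The field names are a hypothetical convention, not a mandated format."""
    return {
        "ai_generated": True,  # the explicit disclosure to the recipient
        "generator": generator,  # which system produced the material
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the label to the exact content
    }

record = disclosure_record(b"<synthetic press photo bytes>", "in-house-diffusion-v2")
print(json.dumps(record, indent=2))
```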

The question of responsibility is at least as central. Who is held accountable if an AI causes harm? Under the new rules, both developers and users are expected to shoulder their share of the responsibility. Swedish initiatives emphasize "legality and transparency": all AI solutions must stay within the legal framework, and stakeholders should be held accountable for their use. Equally important is ethics: AI must be designed to respect human rights and to counter bias and discrimination. This requires clear guidelines and ongoing controls, from how training data is collected to how algorithms are tested for unfair effects.
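The phrase "tested for unfair effects" can be made concrete. One of the simplest checks is the demographic parity gap: the difference in positive-outcome rates between groups. The function and the toy loan data below are invented for illustration; real fairness audits combine several complementary metrics.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups.
    decisions: iterable of 0/1 outcomes; groups: parallel group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented toy data: loan approvals for two applicant groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(f"approval rates: {rates}, gap: {gap:.2f}")  # a policy might flag gaps above, say, 0.1
```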

Managing AI risks is an ongoing process. Companies are now setting up AI ethics councils and strengthening their risk analysis procedures, while the EU is preparing supervisory authorities to monitor compliance. Globally, consensus is growing: organizations such as the EU, the OECD and UNESCO have all developed guidelines for trustworthy and responsible AI that overlap in many respects. We can expect "AI audits" and certifications to become commonplace in the future, just as ISO standards are in information security.

As the icing on the cake, there is now talk of letting AI help keep watch over AI; fighting fire with fire, so to speak. Ironically, technology's own precision may be what is needed to tame its rapid advance, so that we can reap the fruits of AI without compromising on either innovation or integrity.